Browsed by Tag: existential risk

Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018

The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.

U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.

Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentation by Gennady Stolyarov II at RAAD Fest 2018, entitled, “The U.S. Transhumanist Party: Four Years of Advocating for the Future”.

Advocating for the Future – Panel at RAAD Fest 2017 – Gennady Stolyarov II, Zoltan Istvan, Max More, Ben Goertzel, Natasha Vita-More

Gennady Stolyarov II, Chairman of the United States Transhumanist Party, moderated this panel discussion, entitled “Advocating for the Future”, at RAAD Fest 2017 on August 11, 2017, in San Diego, California.

Watch it on YouTube here.

From left to right, the panelists are Zoltan Istvan, Gennady Stolyarov II, Max More, Ben Goertzel, and Natasha Vita-More. With these leading transhumanist luminaries, Mr. Stolyarov discussed subjects such as what the transhumanist movement will look like in 2030, artificial intelligence and sources of existential risk, gamification and the use of games to motivate young people to create a better future, and how to persuade large numbers of people to support life-extension research with at least the same degree of enthusiasm that they display toward the fight against specific diseases.

Learn more about RAAD Fest here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentations of Gennady Stolyarov II and Zoltan Istvan from the “Advocating for the Future” panel.

Gennady Stolyarov II Interviewed by Nikola Danaylov of Singularity.FM

On March 31, 2018, Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, was interviewed by Nikola Danaylov, a.k.a. Socrates, of Singularity.FM. A synopsis, audio download, and embedded video of the interview can be found on Singularity.FM here. You can also watch the YouTube video recording of the interview here.

Apparently this interview, nearly three hours in length, broke the record for the length of Nikola Danaylov’s in-depth, wide-ranging conversations on philosophy, politics, and the future. The interview covered both some of Mr. Stolyarov’s personal work and ideas, such as the illustrated children’s book Death is Wrong, and the efforts and aspirations of the U.S. Transhumanist Party. The conversation also delved into such subjects as the definition of transhumanism, intelligence and morality, the technological Singularity or Singularities, health and fitness, and even cats. Everyone will find something of interest in this wide-ranging discussion.

Visit the U.S. Transhumanist Party website at http://transhumanist-party.org. To help advance the goals of the U.S. Transhumanist Party, as described in Mr. Stolyarov’s comments during the interview, become a member for free, no matter where you reside. Click here to fill out a membership application.

California Transhumanist Party Leadership Meeting – Presentation by Newton Lee and Discussion on Transhumanist Political Efforts

The California Transhumanist Party held its inaugural Leadership Meeting on January 27, 2018. Newton Lee, Chairman of the California Transhumanist Party and Education and Media Advisor of the U.S. Transhumanist Party, outlined the three Core Ideals of the California Transhumanist Party (modified versions of the U.S. Transhumanist Party’s Core Ideals); the forthcoming book “Transhumanism: In the Image of Humans”, which he is curating and which will contain essays from leading transhumanist thinkers in a variety of realms; and possibilities for outreach, future candidates, and collaboration with the U.S. Transhumanist Party and Transhumanist Parties in other States. U.S. Transhumanist Party Chairman Gennady Stolyarov II contributed by providing an overview of the U.S. Transhumanist Party’s current operations and possibilities for running or endorsing candidates for office in the coming years.

Visit the website of the California Transhumanist Party: http://www.californiatranshumanistparty.org/index.html

Read the U.S. Transhumanist Party Constitution: http://transhumanist-party.org/constitution/

Become a member of the U.S. Transhumanist Party for free: http://transhumanist-party.org/membership/

(If you reside in California, this would automatically render you a member of the California Transhumanist Party.)

Transhumanism: Contemporary Issues – Presentation by Gennady Stolyarov II at VSIM:17 Conference in Ravda, Bulgaria

Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, outlines common differences in perspectives in three key areas of contemporary transhumanist discourse: artificial intelligence, religion, and privacy. Mr. Stolyarov follows his presentation of each issue with the U.S. Transhumanist Party’s official stances, which endeavor to resolve commonplace debates and find new common ground in these areas. Watch the video of Mr. Stolyarov’s presentation here.

This presentation was delivered by Mr. Stolyarov on September 14, 2017, virtually to the Vanguard Scientific Instruments in Management 2017 (VSIM:17) Conference in Ravda, Bulgaria. Mr. Stolyarov was introduced by Professor Angel Marchev, Sr. –  the organizer of the conference and the U.S. Transhumanist Party’s Ambassador to Bulgaria.

After his presentation, Mr. Stolyarov answered questions from the audience on the subjects of the political orientation of transhumanism, what the institutional norms of a transhuman society would look like, and how best to advance transhumanist ideas.

Download and view the slides of Mr. Stolyarov’s presentation (with hyperlinks) here.

Listen to the Transhumanist March (March #12, Op. 78), composed by Mr. Stolyarov in 2014, here.

Visit the website of the U.S. Transhumanist Party here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form here.

Become a Foreign Ambassador for the U.S. Transhumanist Party. Apply here.

Congressman Lieu, Senator Markey Introduce the Restricting First Use of Nuclear Weapons Act of 2017 – Press Release by Congressman Ted Lieu & Senator Edward J. Markey

Congressman Ted Lieu (D-CA) & Senator Edward J. Markey (D-MA)
******************************

Washington – Today, Congressman Ted W. Lieu (D-Los Angeles County) and Senator Edward J. Markey (D-Massachusetts) introduced the Restricting First Use of Nuclear Weapons Act of 2017. This legislation would prohibit the President from launching a nuclear first strike without a declaration of war by Congress. The crucial issue of nuclear “first use” is more urgent than ever now that President Donald Trump has the power to launch a nuclear war at a moment’s notice.

Upon introduction of this legislation, Mr. Lieu issued the following statement:

“It is a frightening reality that the U.S. now has a Commander-in-Chief who has demonstrated ignorance of the nuclear triad, stated his desire to be ‘unpredictable’ with nuclear weapons, and as President-elect was making sweeping statements about U.S. nuclear policy over Twitter. Congress must act to preserve global stability by restricting the circumstances under which the U.S. would be the first nation to use a nuclear weapon. Our Founders created a system of checks and balances, and it is essential for that standard to be applied to the potentially civilization-ending threat of nuclear war. I am proud to introduce the Restricting First Use of Nuclear Weapons Act of 2017 with Sen. Markey to realign our nation’s nuclear weapons launch policy with the Constitution and work towards a safer world.”

Upon introduction of this legislation, Senator Markey issued the following statement:

“Nuclear war poses the gravest risk to human survival. Yet, President Trump has suggested that he would consider launching nuclear attacks against terrorists. Unfortunately, by maintaining the option of using nuclear weapons first in a conflict, U.S. policy provides him with that power. In a crisis with another nuclear-armed country, this policy drastically increases the risk of unintended nuclear escalation. Neither President Trump, nor any other president, should be allowed to use nuclear weapons except in response to a nuclear attack. By restricting the first use of nuclear weapons, this legislation enshrines that simple principle into law. I thank Rep. Lieu for his partnership on this common-sense bill during this critical time in our nation’s history.”

Support for the Restricting First Use of Nuclear Weapons Act of 2017:

William J. Perry, Former Secretary of Defense – “During my period as Secretary of Defense, I never confronted a situation, or could even imagine a situation, in which I would recommend that the President make a first strike with nuclear weapons—understanding that such an action, whatever the provocation, would likely bring about the end of civilization.  I believe that the legislation proposed by Congressman Lieu and Senator Markey recognizes that terrible reality.  Certainly a decision that momentous for all of civilization should have the kind of checks and balances on Executive powers called for by our Constitution.”

Tom Z. Collina, Policy Director of Ploughshares Fund – “President Trump now has the keys to the nuclear arsenal, the most deadly killing machine ever created. Within minutes, President Trump could unleash up to 1,000 nuclear weapons, each one many times more powerful than the Hiroshima bomb. Yet Congress has no voice in the most important decision the United States government can make. As it stands now, Congress has a larger role in deciding on the number of military bands than in preventing nuclear catastrophe.”

Derek Johnson, Executive Director of Global Zero – “One modern nuclear weapon is more destructive than all of the bombs detonated in World War II combined. Yet there is no check on a president’s ability to launch the thousands of nuclear weapons at his command. In the wake of the election, the American people are more concerned than ever about the terrible prospect of nuclear war — and what the next commander-in-chief will do with the proverbial ‘red button.’ That such devastating power is concentrated in one person is an affront to our democracy’s founding principles. The proposed legislation is an important first step to reining in this autocratic system and making the world safer from a nuclear catastrophe.”

Megan Amundson, Executive Director of Women’s Action for New Directions (WAND) – “Rep. Lieu and Sen. Markey have rightly called out the dangers of only one person having his or her finger on the nuclear button. The potential misuse of this power in the current global climate has only magnified this concern. It is time to make real progress toward lowering the risk that nuclear weapons are ever used again, and this legislation is a good start.”

Jeff Carter, Executive Director of Physicians for Social Responsibility – “Nuclear weapons pose an unacceptable risk to our national security. Even a “limited” use of nuclear weapons would cause catastrophic climate disruption around the world, including here in the United States. They are simply too profoundly dangerous for one person to be trusted with the power to introduce them into a conflict. Grounded in the fundamental constitutional provision that only Congress has the power to declare war, the Restricting First Use of Nuclear Weapons Act of 2017 is a wise and necessary step to lessen the chance these weapons will ever be used.”

Diane Randall, Executive Secretary of the Friends Committee on National Legislation (Quakers) – “Restricting first-use of nuclear weapons is an urgent priority. Congress should support the Markey-Lieu legislation.”

###

We Seek Not to Become Machines, But to Keep Up with Them – Article by Franco Cortese

Franco Cortese
July 14, 2013
******************************
This article attempts to clarify four areas within the movement of Substrate-Independent Minds and the discipline of Whole-Brain Emulation that are particularly ripe for ready-at-hand misnomers and misconceptions.
***

Substrate-Independence 101:

  • Substrate-Independence:
    It is Substrate-Independence for Mind in general, but not any specific mind in particular.
  • The Term “Uploading” Misconstrues More than it Clarifies:
    Once WBE is experimentally-verified, we won’t be using conventional or general-purpose computers like our desktop PCs to emulate real, specific persons.
  • The Computability of the Mind:
    This concept has nothing to do with the brain operating like a computer. The liver is just as computable as the brain; their difference is one of computational intensity, not category.
  • We Don’t Want to Become The Machines – We Want to Keep Up With Them!:
    SIM & WBE are sciences of life-extension first and foremost. It is not out of sheer technophilia, contempt of the flesh, or wanton want of machinedom that proponents of Uploading support it. It is, for many, because we fear that Recursively Self-Modifying AI will implement an intelligence explosion before Humanity has a chance to come along for the ride. The creation of any one entity superintelligent relative to the rest constitutes both an existential risk and an antithetical affront to Man, whose sole central and incessant essence is to make himself to an increasingly greater degree, and not to have some artificial god do it for him or tell him how to do it.
Substrate-Independence
***

The term “substrate-independence” denotes the philosophical thesis of functionalism – that what is important about the mind and its constitutive sub-systems and processes is their relative function. If such a function could be recreated using an alternate series of component parts or procedural steps, or could be recreated on another substrate entirely, the philosophical thesis of functionalism holds that the result should be the same as the original, experientially speaking.

However, one rather common and ready-at-hand misinterpretation stemming from the term “Substrate-Independence” is the notion that we as personal selves could arbitrarily jump from mental substrate to mental substrate, since mind is software and software can be run on various general-purpose machines. The most common form of this notion is exemplified by scenarios laid out in various Greg Egan novels and stories, wherein a given person sends their mind encoded as a wireless signal to some distant receiver, to be reinstantiated upon arrival.

The term “substrate-independent minds” should denote substrate-independence for mind in general – again, the philosophical thesis of functionalism – and not this second, illegitimate notion. In order to send oneself as such a signal, one would have to put all the processes constituting the mind “on pause” – that is, all causal interaction and thus causal continuity between the software components and processes instantiating our selves would be halted while the software was encoded as a signal, transmitted and subsequently decoded. We could expect this to be equivalent to temporary brain death or to destructive uploading without any sort of gradual replacement, integration, or transfer procedure. Each of these scenarios incurs the ceasing of all causal interaction and causal continuity among the constitutive components and processes instantiating the mind. Yes, we would be instantiated upon reaching our destination, but we can expect this to be as phenomenally discontinuous as brain death or destructive uploading.

There is much talk in the philosophical and futurist circles – where Substrate-Independent Minds are a familiar topic and a common point of discussion – on how the mind is software. This sentiment ultimately derives from functionalism, and the notion that when it comes to mind it is not the material of the brain that matters, but the process(es) emerging therefrom. And because almost all software is designed so as to be implemented on general-purpose (i.e., standardized) hardware, it is often assumed that we should likewise be able to transfer the software of the mind onto a new physical computational substrate with as much ease as we transfer ordinary software. While we would emerge from such a transfer functionally isomorphic with ourselves prior to the jump from computer to computer, we can expect this to be the phenomenal equivalent of brain death or destructive uploading, again, because all causal interaction and continuity between that software’s constitutive sub-processes has been discontinued. We would have been put on pause in the time between leaving one computer, whether as static signal or static solid-state storage, and arriving at the other.

This is not to say that we couldn’t transfer the physical substrate implementing the “software” of our mind to another body, provided the other body were equipped to receive such a physical substrate. But this doesn’t have quite the same advantage as beaming oneself to the other side of Earth, or Andromeda for that matter, at the speed of light.

But to transfer a given WBE to another mental substrate without incurring phenomenal discontinuity may very well involve a second gradual integration procedure, in addition to the one the WBE initially underwent (assuming it isn’t a product of destructive uploading). And indeed, this would be more properly thought of in the context of a new substrate being gradually integrated with the WBE’s existing substrate, rather than the other way around (i.e., portions of the WBE’s substrate being gradually integrated with an external substrate.) It is likely to be much easier to simply transfer a given physical/mental substrate to another body, or to bypass this need altogether by actuating bodies via tele-operation instead.

In summary, what is sought is substrate-independence for mind in general, and not for a specific mind in particular (at least not without a gradual integration procedure, like the type underlying the notion of gradual uploading, so as to transfer such a mind to a new substrate without causing phenomenal discontinuity).

The Term “Uploading” Misconstrues More Than It Clarifies

The term “Mind Uploading” has some drawbacks and creates common initial misconceptions. It is based on terminology originating in the context of conventional, contemporary computers – which may lead to the initial impression that we are talking about uploading a given mind into a desktop PC, to be run in the manner that Microsoft Word is run. This makes the notion of WBE seem more fantastic and incredible – and thus improbable – than it actually is. I don’t think anyone seriously speculating about WBE would entertain such a notion.

Another potential misinterpretation particularly likely to result from the term “Mind Uploading” is that we seek to upload a mind into a computer – as though it were nothing more than a simple file transfer. This, again, connotes modern paradigms of computation and communications technology that are unlikely to be used for WBE. It also creates the connotation of putting the mind into a computer – whereas a more accurate connotation, at least as far as gradual uploading as opposed to destructive uploading is concerned, would be bringing the computer gradually into the biological mind.

It is easy to see why the term initially came into use. The notion of destructive uploading was the first embodiment of the concept. The notion of gradual uploading so as to mitigate the philosophical problems pertaining to how much a copy can be considered the same person as the original, especially in contexts where they are both simultaneously existent, came afterward. In the context of destructive uploading, it makes more connotative sense to think of concepts like uploading and file transfer.

But in the notion of gradual uploading, portions of the biological brain – most commonly single neurons, as in Robert A. Freitas’s and Ray Kurzweil’s versions of gradual uploading – are replaced with in-vivo computational substrate, to be placed where the neuron it is replacing was located. Such a computational substrate would be operatively connected to electrical or electrochemical sensors (to translate the biochemical or, more generally, biophysical output of adjacent neurons into computational input that can be used by the computational emulation) and electrical or electrochemical actuators (to likewise translate computational output of the emulation into biophysical input that can be used by adjacent biological neurons). It is possible to have this computational emulation reside in a physical substrate existing outside of the biological brain, connected to in-vivo biophysical sensors and actuators via wireless communication (i.e., communicating via electromagnetic signal), but this simply introduces a potential lag-time that may then have to be overcome by faster sensors, faster actuators, or a faster emulation. It is likely that the lag-time would be negligible, especially if it was located in a convenient module external to the body but “on it” at all times, to minimize transmission delays increasing as one gets farther away from such an external computational device. This would also likely necessitate additional computation to model the necessary changes to transmission speed in response to how far away the person is.  Otherwise, signals that are meant to arrive at a given time could arrive too soon or too late, thereby disrupting functionality. However, placing the computational substrate in vivo obviates these potential logistical obstacles.
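
To see concretely why the lag-time for a body-worn external module could plausibly be negligible, consider a rough back-of-the-envelope check (a minimal sketch in Python; the one-meter module distance and the millisecond neural timescale are illustrative assumptions of mine, not figures from this article):

```python
# Back-of-the-envelope check on wireless lag for a hypothetical body-worn
# emulation module. Illustrative assumptions, not figures from the article:
# the module sits ~1 m from the in-vivo sensors/actuators, and the relevant
# neural events unfold on millisecond timescales (typical synaptic delays
# are on the order of 0.5-1 ms).

SPEED_OF_LIGHT_M_PER_S = 3.0e8    # EM signals in air propagate at roughly c
distance_m = 1.0                  # assumed sensor-to-module distance
neural_timescale_s = 1.0e-3       # ~1 ms, typical scale of synaptic events

one_way_delay_s = distance_m / SPEED_OF_LIGHT_M_PER_S
round_trip_delay_s = 2 * one_way_delay_s

print(f"One-way delay: {one_way_delay_s * 1e9:.1f} ns")
print(f"Round trip:    {round_trip_delay_s * 1e9:.1f} ns")
print(f"Fraction of a 1-ms neural event: {round_trip_delay_s / neural_timescale_s:.1e}")

# Result: a ~6.7 ns round trip, roughly five orders of magnitude below the
# millisecond timescale of biological neurons -- so raw transmission delay
# alone would indeed be negligible for a module kept on or near the body,
# though, as noted above, modeling distance-dependent changes in delay
# would still require additional computation.
```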

This notion is I think not brought into the discussion enough. It is an intuitively obvious notion if you’ve thought a great deal about Substrate-Independent Minds and frequented discussions on Mind Uploading. But to a newcomer who has heard the term Gradual Uploading for the first time, it is all too easy to think “yes, but then one emulated neuron would exist on a computer, and the original biological neuron would still be in the brain. So once you’ve gradually emulated all these neurons, you have an emulation on a computer, and the original biological brain, still as separate physical entities. Then you have an original and the copy – so where does the gradual in Gradual Uploading come in? How is this any different than destructive uploading? At the end of the day you still have a copy and an original as separate entities.”

This seeming impasse is I think enough to make the notion of Gradual Uploading seem at least intuitively or initially incredible and infeasible before people take the time to read the literature and discover how gradual uploading could actually be achieved (i.e., wherein each emulated neuron is connected to biophysical sensors and actuators to facilitate operational connection and causal interaction with existing in-vivo biological neurons) without fatally tripping upon such seeming logistical impasses, as in the example above. The connotations created by the term I think to some extent make it seem so fantastic (as in the overly simplified misinterpretations considered above) that people write off the possibility before delving deep enough into the literature and discussion to actually ascertain the possibility with any rigor.

The Computability of the Mind

Another common misconception is that the feasibility of Mind Uploading is based upon the notion that the brain is a computer or operates like a computer. The worst version of this misinterpretation that I’ve come across is the claim that proponents and supporters of Mind Uploading believe the mind to be similar in operation to current and conventional paradigms of computers.

Before I elaborate on why this is wrong, I’d like to point out a particularly harmful sentiment that can result from this notion. It makes the concept of Mind Uploading seem dehumanizing, because conventional computers don’t display anything like intelligence or emotion. This makes people conflate the possible behaviors of future computers with the behaviors of current computers. Obviously computers don’t feel happiness or love, and so – the reasoning goes – to say that the brain is like a computer is a farcical claim.

Machines don’t have to be as simple or as un-adaptable and invariant as they are today. The universe itself is a machine. In other words, either everything is a machine or nothing is.

This misunderstanding also makes people think that advocates and supporters of Mind Uploading are claiming that the mind is reducible to basic or simple autonomous operations, like cogs in a machine, which constitutes for many people a seeming affront to our privileged place in the universe as humans, in general, and to our culturally ingrained notions of human dignity being inextricably tied to physical irreducibility, in particular. The intuitive notions of human dignity and the ontologically privileged nature of humanity have yet to catch up with physicalism and scientific materialism (a.k.a. metaphysical naturalism). It is not the proponents of Mind Uploading who are raising these claims, but science itself – and it has been doing so for hundreds of years, I might add. Man’s privileged and physically irreducible ontological status has become more and more undermined throughout history since at least as far back as Darwin’s theory of evolution, which brought the notion of the past and future phenotypic evolution of humanity into scientific plausibility for the first time.

It is also seemingly disenfranchising to many people, in that notions of human free will and autonomy seem to be challenged by physical reductionism and determinism – perhaps because many people’s notions of free will are still associated with a non-physical, untouchably metaphysical human soul (i.e., mind-body dualism) which lies outside the purview of physical causality. To compare the brain to a “mindless machine” is still for many people disenfranchising to the extent that it questions the legitimacy of their metaphysically tied notions of free will.

That the sheer audacity of experience and the raucous beauty of feeling are ultimately reducible to physical and procedural operations (I hesitate to use the word “mechanisms” for its likewise misconnotative conceptual associations) does not take away from them. If they were the result of some untouchable metaphysical property – a sentiment that mind-body dualism promulgated for quite some time – then there would be no way for us to understand them, to really appreciate them, and to change them (e.g., improve upon them) in any way. Physicalism and scientific materialism are needed if we are to ever see how it is done and to ever hope to change it for the better. Figuring out how things work is one of Man’s highest merits – and there is no reason Man’s urge to discover and determine the underlying causes of the world should not apply to his own self as well.

Moreover, the fact that experience, feeling, being, and mind result from the convergence of singly simple systems and processes makes the mind’s emergence from such simple convergence all the more astounding, amazing, and rare, not less! If the complexity and unpredictability of mind were the result of complex and unpredictable underlying causes (like the metaphysical notions of mind-body dualism connote), then the fact that mind turned out to be complex and unpredictable wouldn’t be much of a surprise. The simplicity of the mind’s underlying mechanisms makes the mind’s emergence all the more amazing, and should not take away from our human dignity but should instead raise it up to heights yet unheralded.

Now that we have addressed such potentially harmful second-order misinterpretations, we will address their root: the common misinterpretations likely to result from the phrase “the computability of the mind”. Not only does this phrase not say that the mind is similar in basic operation to conventional paradigms of computation – as though a neuron were comparable to a logic gate or transistor – but neither does it necessarily make the more credible claim that the mind is like a computer in general. It is that conflation of two different types of physical systems – computers and the brain – which makes the notion of Mind Uploading seem dubious.

The kidney is just as computable as the brain. That is to say that the computability of mind denotes the ability to make predictively accurate computational models (i.e., simulations and emulations) of biological systems like the brain, and is not dependent on anything like a fundamental operational similarity between biological brains and digital computers. We can make a computational model of a given physical system, feed it some typical inputs, and get a resulting output that approximately matches the real-world (i.e., physical) output of such a system.

The computability of the mind has very little to do with the mind acting as or operating like a computer, and much, much more to do with the fact that we can build predictively accurate computational models of physical systems in general. This also, advantageously, negates and obviates many of the seemingly dehumanizing and indignifying connotations identified above that often result from the claim that the brain is like a machine or like a computer. It is not that the brain is like a computer – it is just that computers are capable of predictively modeling the physical systems of the universe itself.
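
To make the distinction concrete, here is a minimal sketch in the spirit of the kidney example above (the rate constant, initial dose, and time step are arbitrary illustrative assumptions): a predictively accurate computational model of first-order clearance, a physical process that in no way “operates like a computer”:

```python
import math

# Toy illustration of "computability" in the sense used above: we can build
# a predictively accurate model of a physical system (here, first-order
# elimination of a substance, dC/dt = -k*C) without the system itself
# operating anything like a computer. All parameter values are arbitrary
# illustrative assumptions.

k = 0.3        # elimination rate constant, per hour
c0 = 100.0     # initial concentration, arbitrary units
dt = 0.01      # simulation time step, hours

def simulate(hours):
    """Step a simple numerical model forward and return the concentration."""
    c = c0
    for _ in range(int(hours / dt)):
        c += -k * c * dt          # Euler integration of the rate law
    return c

for t in (1, 4, 8):
    modeled = simulate(t)
    actual = c0 * math.exp(-k * t)   # the system's actual behavior
    print(f"t={t}h  model={modeled:6.2f}  system={actual:6.2f}")

# The model's outputs approximately match the physical system's outputs for
# typical inputs -- which is all that "computability" requires here.
```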

We Want Not To Become Machines, But To Keep Up With Them!

Too often is uploading portrayed as the means to superhuman speed of thought or to transcending our humanity. It is not that we want to become less human, or to become like a machine. For most Transhumanists and indeed most proponents of Mind Uploading and Substrate-Independent Minds, meat is machinery anyway. In other words, there is no real (i.e., legitimate) ontological distinction between human minds and machines to begin with. Too often is uploading seen as the desire for superhuman abilities. Too often is it seen as a bonus, nice but ultimately unnecessary.

I vehemently disagree. Uploading has been from the start for me (and I think for many other proponents and supporters of Mind Uploading) a means of life extension, of deferring and ultimately defeating untimely, involuntary death, as opposed to an ultimately unnecessary means to better powers, a more privileged position relative to the rest of humanity, or to eschewing our humanity in a fit of contempt of the flesh. We do not want to turn ourselves into Artificial Intelligence, which is a somewhat perverse and burlesque caricature that is associated with Mind Uploading far too often.

The notion of gradual uploading is implicitly a means of life extension. Gradual uploading will be significantly harder to accomplish than destructive uploading. It requires a host of technologies and methodologies – brain-scanning, in-vivo locomotive systems such as but not limited to nanotechnology, or else extremely robust biotechnology – and a host of precautions to prevent causing phenomenal discontinuity, such as allowing each non-biological functional replacement time to causally interact with adjacent biological components before the next biological component that it causally interacts with is likewise replaced. Gradual uploading is a much harder feat than destructive uploading, and the only advantage it has over destructive uploading is preserving the phenomenal continuity of a single specific person. In this way it is implicitly a means of life extension, rather than a means to the creation of AGI, because its only benefit is the preservation and continuation of a single, specific human life, and that benefit entails a host of added precautions and additional necessitated technological and methodological infrastructures.

If we didn’t have to fear the creation of recursively self-improving AI, biased towards being likely to recursively self-modify at a rate faster than humans are likely to (or indeed, are able to safely – that is, gradually enough to prevent phenomenal discontinuity), then I would favor biotechnological methods of achieving indefinite lifespans over gradual uploading. But with the way things are, I am an advocate of gradual Mind Uploading first and foremost because I think it may prove necessary to prevent humanity from being left behind by recursively self-modifying superintelligences. I hope that it ultimately will not prove necessary – but at the current time I feel that it is somewhat likely.

Most people who wish to implement or accelerate an intelligence explosion a la I.J. Good, and more recently Vernor Vinge and Ray Kurzweil, wish to do so because they feel that such a recursively self-modifying superintelligence (RSMSI) could essentially solve all of humanity’s problems – disease, death, scarcity, existential insecurity. I think that the potential benefits of creating a RSMSI are superseded by the drastic increase in existential risk it would entail in making any one entity superintelligent relative to humanity. The old God of yore is finally going out of fashion, one and a quarter centuries late to his own eulogy. Let’s please not make another one, now with a little reality under his belt this time around.

Intelligence is a far greater source of existential and global catastrophic risk than any technology that could be wielded by such an intelligence (except, of course, for technologies that would allow an intelligence to increase its own intelligence). Intelligence can invent new technologies and conceive of ways to counteract any defense systems we put in place to protect against the destructive potentials of any given technology. A superintelligence is far more dangerous than rogue nanotech (i.e., grey-goo) or bioweapons. When intelligence comes into play, then all bets are off. I think culture exemplifies this prominently enough. Moreover, for the first time in history the technological solutions to these problems – death, disease, scarcity – are on the conceptual horizon. We can fix these problems ourselves, without creating an effective God relative to Man and incurring the extreme potential for complete human extinction that such a relative superintelligence would entail.

Thus uploading constitutes one of the means by which humanity can choose, volitionally, to stay on the leading edge of change, discovery, invention, and novelty, if the creation of a RSMSI is indeed imminent. It is not that we wish to become machines and eschew our humanity – rather the loss of autonomy and freedom inherent in the creation of a relative superintelligence is antithetical to the defining features of humanity. In order to preserve the uniquely human thrust toward greater self-determination in the face of such a RSMSI, or at least be given the choice of doing so, we may require the ability to gradually upload so as to stay on equal footing in terms of speed of thought and general level of intelligence (which is roughly correlative with the capacity to affect change in the world and thus to determine its determining circumstances and conditions as well).

In a perfect world we wouldn’t need to take the chance of phenomenal discontinuity inherent in gradual uploading. In gradual uploading there is always a chance, no matter how small, that we will come out the other side of the procedure as a different (i.e., phenomenally distinct) person. We can seek to minimize the chances of that outcome by extending the degree of graduality with which we gradually replace the material constituents of the mind, and by minimizing the scale at which we gradually replace those material constituents (i.e., gradual substrate replacement one ion-channel at a time would be likelier to ensure the preservation of phenomenal continuity than gradual substrate replacement neuron by neuron would be). But there is always a chance.

This is why biotechnological means of indefinite lifespans have an immediate advantage over uploading, and why if non-human RSMSI were not a worry, I would favor biotechnological methods of indefinite lifespans over Mind Uploading. But this isn’t the case; rogue RSMSI are a potential problem, and so the ability to secure our own autonomy in the face of a rising RSMSI may necessitate advocating Mind Uploading over biotechnological methods of indefinite lifespans.

Mind Uploading has some ancillary benefits over biotechnological means of indefinite lifespans as well, however. If functional equivalence is validated (i.e., if it is validated that the basic approach works), mitigating existing sources of damage becomes categorically easier. In physical embodiment, repairing structural, connectional, or procedural sub-systems in the body requires (1) a means of determining the source of damage and (2) a host of technologies and corresponding methodologies to enter the body and make physical changes to negate or otherwise obviate the structural, connectional, or procedural source of such damages, and then exit the body without damaging or causing dysfunction to other systems in the process. Both of these requirements become much easier in the virtual embodiment of whole-brain emulation.

First, looking toward requirement (2), we do not need to actually design any technologies and methodologies for entering and leaving the system without damage or dysfunction, or for actually implementing physical changes leading to the remediation of the sources of damage. In virtual embodiment this requires nothing more than rewriting information. Since in the case of WBE we have the capacity to rewrite information as easily as it was written in the first place, actually implementing those changes is as easy as rewriting a Word file – though we would still need to know what changes to make, which is really the hard part in this case. There is no categorical difference, since it is information, and we would already have a means of rewriting information.

Looking toward requirement (1), actually elucidating the structural, connectional, or procedural sources of damage and/or dysfunction, we see that virtual embodiment makes this much easier as well. In physical embodiment we would need to make changes to the system in order to determine the source of the damage. In virtual embodiment we could run a section of emulation for a given amount of time, change or eliminate a given informational variable (e.g., a structure or component) and see how this affects the emergent system-state of the emulation instance.

Iteratively doing this to different components and different sequences of components, in trial-and-error fashion, should lead to the elucidation of the structural, connectional or procedural sources of damage and dysfunction. The fact that an emulation can be run faster (thus accelerating this iterative change-and-check procedure) and that we can “rewind” or “play back” an instance of emulation time exactly as it occurred initially means that noise (i.e., sources of error) from natural systemic state-changes would not affect the results of this procedure, whereas in physicality systems and structures are always changing, which constitutes a source of experimental noise. The conditions of the experiment would be exactly the same in every iteration of this change-and-check procedure. Moreover, the ability to arbitrarily speed up and slow down the emulation will aid in our detecting and locating the emergent changes caused by changing or eliminating a given microscale component, structure, or process.
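
A minimal sketch of this change-and-check loop, with a toy stand-in for an emulation (the component count, coupling rule, and output metric are all illustrative assumptions of mine; a fixed random seed plays the role of “rewinding” the emulation so that every iteration replays identical conditions):

```python
import random

def run_emulation(disabled=frozenset(), seed=42):
    """Toy stand-in for an emulation instance: a small network of coupled
    components whose emergent system-state we measure. Seeding the RNG is
    the analogue of 'rewinding' -- every run replays identical conditions."""
    rng = random.Random(seed)
    state = [rng.random() for _ in range(8)]     # initial component states
    for _ in range(100):                         # run for a fixed interval
        for i in range(len(state)):
            if i in disabled:
                state[i] = 0.0                   # the knocked-out component
            else:
                # toy coupling: each component is nudged by its neighbor
                state[i] = 0.5 * state[i] + 0.5 * state[(i + 1) % len(state)]
    return sum(state)                            # emergent system-state metric

baseline = run_emulation()

# Iterative change-and-check: eliminate one component at a time and observe
# how the emergent state diverges from the unmodified baseline run.
for component in range(8):
    altered = run_emulation(disabled={component})
    print(f"component {component}: deviation {abs(altered - baseline):.4f}")

# Because every run is an exact replay, any deviation is attributable to the
# eliminated component rather than to natural systemic state-changes -- the
# "experimental noise" that physical embodiment cannot eliminate.
```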

Thus the process of finding the sources of damage correlative with disease and aging (especially insofar as the brain is concerned) could be greatly improved through the process of uploading. Moreover, WBE should accelerate the technological and methodological development of the computational emulation of biological systems in general, meaning that it would be possible to use such procedures to detect the structural, connectional, and procedural sources of age-related damage and systemic dysfunction in the body itself, as opposed to just the brain, as well.

Note that this iterative change-and-check procedure would be just as possible via destructive uploading as it would with gradual uploading. Moreover, in terms of people actually instantiated as whole-brain emulations, actually remediating those structural, connectional, and/or procedural sources of damage is much easier for WBEs than for physically embodied humans. Incidentally, if being able to distinguish the homeostatic, regulatory, and metabolic structures and processes in the brain from the computational or signal-processing structures and processes is a requirement for uploading (which I don’t think it necessarily is, although I do think that such a distinction would decrease the ultimate computational intensity and thus computational requirements of uploading, thereby allowing it to be implemented sooner and have wider availability), then this iterative change-and-check procedure could also be used to accelerate the elucidation of such a distinction, for the same reasons that it could accelerate the elucidation of structural, connectional, and procedural sources of age-related systemic damage and dysfunction.

Lastly, while uploading (particularly instances in which a single entity or small group of entities is uploaded prior to the rest of humanity – i.e. not a maximally distributed intelligence explosion) itself constitutes a source of existential risk, it also constitutes a means of mitigating existential risk as well. Currently we stand on the surface of the earth, naked to whatever might lurk in the deep night of space. We have not been watching the sky for long enough to know with any certainty that some unforeseen cosmic process could not come along to wipe us out at any time. Uploading would allow at least a small portion of humanity to live virtually on a computational substrate located deep underground, away from the surface of the earth and its inherent dangers, thus preserving the future human heritage should an extinction event befall humanity. Uploading would also prevent the danger of being physically killed by some accident of physicality, like being hit by a bus or struck by lightning.

Uploading is also the most resource-efficient means of life-extension on the table, because virtual embodiment essentially negates the need for most physical resources, instead necessitating only one – energy – and increasing computational price-performance means that how much a given amount of energy can accomplish is continually increasing.

It also mitigates the most pressing ethical problem of indefinite lifespans – overpopulation. In virtual embodiment, overpopulation ceases to be an issue almost ipso facto. I agree with John Smart’s STEM compression hypothesis – that the advantages proffered by virtual embodiment will make choosing it over physical embodiment, in the long run at least, an obvious choice for most civilizations – and I think it will be the volitional choice for most future persons. It is safer, more resource-efficient (and thus more ethical, if one thinks that forestalling future births in order to maintain existing life is unethical), and the more advantageous choice. We will not need to say: migrate into virtuality if you want another physically embodied child. Most people will make the choice to go VR themselves, simply due to the numerous advantages and the lack of any experiential incomparabilities (i.e., modalities of experience possible in physicality but not possible in VR).

So in summary, yes, Mind Uploading (especially gradual uploading) is more a means of life-extension than a means to arbitrarily greater speed of thought, intelligence or power (i.e., capacity to affect change in the world). We do not seek to become machines, only to retain the capability of choosing to remain on equal footing with them if the creation of RSMSI is indeed imminent. There is no other reason to increase our collective speed of thought, and to do so would be arbitrary – unless we expected to be unable to prevent the physical end of the universe, in which case it would increase the ultimate amount of time and number of lives that could be instantiated in the time we have left.

The fallaciousness of many of these misconceptions may be glaringly obvious, especially to those readers familiar with Mind Uploading as a notion and with Substrate-Independent Minds and/or Whole-Brain Emulation as disciplines. I may be to some extent preaching to the choir in these cases. But I find many of these misinterpretations far too predominant and recurrent to be left alone.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

The Glorious Transhumanist Manifesto – Article by G. Stolyarov II

G. Stolyarov II
February 6, 2013
******************************

Oblivion threatens to engulf us all, spreading toward us across the void of the dead past. Billions have succumbed to it already – precious universes of thought, feeling, and sensation, snuffed out by senescence, decay, and ruin. But we do not acquiesce. We fight the greatest war of all time, a war not against men but against what for ages was unquestioningly accepted as the human predicament. We do not meekly accept our limitations, but boldly strive to overcome them. We do not resign ourselves to lifespans of a paltry forty, sixty, eighty years. We do not concern ourselves with “putting our affairs in order” so that the next generation can repeat the same cycle of growth and decay, just a quarter-century removed in time. We journey not from cradle to grave, but from our present narrow confines to the vast expanses of space, time, and intelligence. We are transhumanists, who confront ruin itself with the courage and conviction that this foe, too, shall be overcome.

What are our weapons in the war against ruin? Reason and science, philosophy and technology, will and skill, persuasion and action. We do not accept that things must remain as they have been in recent memory. We recognize that the status quo is but a fleeting moment, and stability is an illusion. The choice for humanity is clear: we move forward in exponential progress, or ruin drags us down into the primeval bog. Evolution is cruel and has wiped out the overwhelming majority of all species. We must not become its victims. Existential risks continue to threaten us, but the worst risks are not those of our making. The greatest risk we face is that of our own fear and inaction, of allowing indifferent, thoughtlessly destructive forces of the wild to demolish what we and our ancestors have painstakingly built. Only mastery of nature, including our own biology, will enable us to preserve and amplify what we hold dear. Machines – from the tiniest nanobots to the most comprehensive networks of supercomputers and artificial intellects – will be our allies in our struggle. Eventually, they and we shall become inseparable. From them we shall gain faculties that biology alone could never provide. From us they shall gain life and reason.

Imagine the vast, open realm of possibilities for a being without a built-in expiration date. What could you do today if you knew that an inexhaustible succession of tomorrows awaited? No more would the nagging reminder of your forthcoming oblivion confine your focus to the most rudimentary of tasks. What would you attempt to learn, to experience, to build, to bring into your ever-widening sphere? We will give you the universe, if you accept it. And if you do not accept it right away, the splendor of the transhumanist world will continue to beckon. It will be a world devoid of the annihilation of the good, where the suffering of sentient beings will diminish until it is no more. It will be a world where each person will finally have the total liberty to think and innovate – the only treatment which properly respects the minds and dignity of rational beings. As the fruits of the human mind and its creations finally blossom all around you, partaking of them will be irresistible. But all this is not yet ours, and the future can only be what we make of it. The greatest struggles of all history await. Become a champion of the future, to prevent yourself from disappearing into the past. Fight the war on ruin, so you do not become its casualty. Become a transhumanist: you have nothing to lose but loss itself.

Common Misconceptions about Transhumanism – Article by G. Stolyarov II

G. Stolyarov II
January 26, 2013
******************************

After the publication of my review of Nassim Taleb’s latest book Antifragile, numerous comments were made by Taleb’s followers – many of them derisive – on Taleb’s Facebook page. (You can see a screenshot of these comments here.) While I will only delve into a few of the specific comments in this article, I consider it important to distill the common misconceptions that motivate them. Transhumanism is often misunderstood and maligned by those who are ignorant of it – or by those who have been exposed solely to detractors such as John Gray, Leon Kass, and Taleb himself. This essay will serve to correct these misconceptions in a concise fashion. Those who still wish to criticize transhumanism should at least understand what they are criticizing and present arguments against the real ideas, rather than straw men constructed by the opponents of radical technological progress.

Misconception #1: Transhumanism is a religion.

Transhumanism does not posit the existence of any deity or other supernatural entity (though some transhumanists are religious independently of their transhumanism), nor does transhumanism hold a faith (belief without evidence) in any phenomenon, event, or outcome. Transhumanists certainly hope that technology will advance to radically improve human opportunities, abilities, and longevity – but this is a hope founded in the historical evidence of technological progress to date, and the logical extrapolation of such progress. Moreover, this is a contingent hope. Insofar as the future is unknowable, the exact trajectory of progress is difficult to predict, to say the least. Furthermore, the speed of progress depends on the skill, devotion, and liberty of the people involved in bringing it about. Some societal and political climates are more conducive to progress than others. Transhumanism does not rely on prophecy or mystical fiat. It merely posits a feasible and desirable future of radical technological progress and exhorts us to help achieve it. Some may claim that transhumanism is a religion that worships man – but that would distort the term “religion” so far from its original meaning as to render it vacuous and merely a pejorative used to label whatever system of thinking one dislikes. Besides, those who make that allegation would probably perceive a mere semantic quibble between seeking man’s advancement and worshipping him. But, irrespective of semantics, the facts do not support the view that transhumanism is a religion. After all, transhumanists do not spend their Sunday mornings singing songs and chanting praises to the Glory of Man.

Misconception #2: Transhumanism is a cult.

A cult, unlike a broader philosophy or religion, is characterized by extreme insularity and dependence on a closely controlling hierarchy of leaders. Transhumanism has neither element. Transhumanists are not urged to disassociate themselves from the wider world; indeed, they are frequently involved in advanced research, cutting-edge invention, and prominent activism. Furthermore, transhumanism does not have a hierarchy or leaders who demand obedience. Cosmopolitanism is a common trait among transhumanists. Respected thinkers, such as Ray Kurzweil, Max More, and Aubrey de Grey, are open to discussion and debate and have had interesting differences in their own views of the future. A still highly relevant conversation from 2002, “Max More and Ray Kurzweil on the Singularity“, highlights the sophisticated and tolerant way in which respected transhumanists compare and contrast their individual outlooks and attempt to make progress in their understanding. Any transhumanist is free to criticize any other transhumanist and to adopt some of another transhumanist’s ideas while rejecting others. Because transhumanism characterizes a loose network of thinkers and ideas, there is plenty of room for heterogeneity and intellectual evolution. As Max More put it in the “Principles of Extropy, v. 3.11”, “the world does not need another totalistic dogma.”  Transhumanism does not supplant all other aspects of an individual’s life and can coexist with numerous other interests, persuasions, personal relationships, and occupations.

Misconception #3: Transhumanists want to destroy humanity. Why else would they use terms such as “posthuman” and “postbiological”?

Transhumanists do not wish to destroy any human. In fact, we want to prolong the lives of as many people as possible, for as long as possible! The terms “transhuman” and “posthuman” refer to overcoming the historical limitations and failure modes of human beings – the precise vulnerabilities that have rendered life, in Thomas Hobbes’s words, “nasty, brutish, and short” for most of our species’ past. A species that transcends biology will continue to have biological elements. Indeed, my personal preference in such a future would be to retain all of my existing healthy biological capacities, but also to supplement them with other biological and non-biological enhancements that would greatly extend the length and quality of my life. No transhumanist wants human beings to die out and be replaced by intelligent machines, and every transhumanist wants today’s humans to survive to benefit from future technologies. Transhumanists who advocate the development of powerful artificial intelligence (AI) support either (i) integration of human beings with AI components or (ii) the harmonious coexistence of enhanced humans and autonomous AI entities. Even those transhumanists who advocate “mind backups” or “mind uploading” in an electronic medium (I am not one of them, as I explain here) do not wish for their biological existences to be intentionally destroyed. They conceive of mind uploads as contingency plans in case their biological bodies perish.

Even the “artilect war” anticipated by more pessimistic transhumanists such as Hugo de Garis is greatly misunderstood. Such a war, if it arises, would not come from advanced technology, but rather from reactionaries attempting to forcibly suppress technological advances and persecute their advocates. Most transhumanists do not consider this scenario to be likely in any event. More probable are lower-level protracted cultural disputes and clashes over particular technological developments.

Misconception #4: “A global theocracy envisioned by Moonies or the Taliban would be preferable to the kind of future these traitors to the human species have their hearts set on, because even the most joyless existence is preferable to oblivion.”

The above was an actual comment on the Taleb Facebook thread. It is astonishing that anyone would consider theocratic oppression preferable to radical life extension, universal abundance, ever-expanding knowledge of macroscopic and microscopic realms, exploration of the universe, and the liberation of individuals from historical chains of oppression and parasitism. This misconception is fueled by the strange notion that transhumanists (or technological progress in general) will destroy us all – as exemplified by the “Terminator” scenario of hostile AI or the “gray goo” scenario of nanotechnology run amok. Yet all of the apocalyptic scenarios involving future technology assume away the safeguards that elementary common sense would introduce. Furthermore, they ignore the incentives generated by market forces, as well as the sheer numerical and intellectual superiority of careful scientists over rogues – factors that would tip the scales greatly in favor of defenses against existential risk. As I explain in “Technology as the Solution to Existential Risk” and “Non-Apocalypse, Existential Risk, and Why Humanity Will Prevail”, the greatest existential risks have either always been with us (e.g., the risk of an asteroid impact with Earth) or are in humanity’s past (e.g., the risk of a nuclear holocaust annihilating civilization). Technology is the solution to such existential risks. Indeed, the greatest existential risk is fear of technology, which can retard or outright thwart solutions to the perils that may, in the status quo, doom us as a species. As an example, Mark Waser has written an excellent commentary on the “inconvenient fact that not developing AI (in a timely fashion) to help mitigate other existential risks is itself likely to lead to a substantially increased existential risk”.

Misconception #5: Transhumanists want to turn people into the Borg from Star Trek.

The Borg are the epitome of a collectivistic society, in which each individual is merely a cog in the giant species machine. Most transhumanists are ethical individualists, and even those with communitarian leanings still greatly respect individual differences and promote individual flourishing and opportunity. Whatever their positions on the proper role of government in society might be, all transhumanists agree that individuals should not be destroyed or absorbed into a collective where they lose their personality and unique intellectual attributes. Even those transhumanists who wish for direct sharing of perceptions and information among individual minds do not advocate the elimination of individuality. Rather, their vision might better be thought of as puzzle pieces joining together while remaining capable of full separation and of autonomous, unimpaired function.

My own attraction to transhumanism is precisely due to its possibilities for preserving individuals qua individuals and avoiding the loss of the precious internal universe of each person. As I expressed in Part 1 of my “Eliminating Death” video series, death is a horrendous waste of irreplaceable human talents, ideas, memories, skills, and direct experiences of the world. Just as transhumanists would recoil at the absorption of humankind into the Borg, so they rightly denounce the dissolution of individuality that presently occurs with the oblivion known as death.

Misconception #6: Transhumanists usually portray themselves “like robotic, anime-like characters”.

That depends on the transhumanist in question. Personally, I portray myself as me, wearing a suit and tie (which Taleb and his followers dislike just as much – but that is their loss). Furthermore, I see nothing robotic or anime-like about the public personas of Ray Kurzweil, Aubrey de Grey, or Max More, either.

Misconception #7: “Transhumanism is attracting devotees of a frighteningly high scientific caliber, morally retarded geniuses who just might be able to develop the humanity-obliterating technology they now merely fantasize about. It’s a lot like a Heaven’s Gate cult, but with prestigious degrees in physics and engineering, many millions more in financial backing, a growing foothold in mainstream culture, a long view of implementing their plan, and a death wish that extends to the whole human race not just themselves.”

This is another comment from the Taleb Facebook thread. Ironically, the commenter asserts that the transhumanists, who support the indefinite lengthening of human life, have a “death wish” and are “morally retarded”, while he – who opposes the technological progress needed to preserve us from the abyss of oblivion – apparently considers himself a champion of morality and a supporter of life. If ever there was an inversion of characterizations, this is it. At least the commenter acknowledges the strong technical skills of many transhumanists – but calling them “morally retarded” presupposes a death-embracing morality that should itself be challenged and overcome, lest it sentence each of us to death. The Orwellian mindset that “evil is good” and “death is life” should be called out for the destructive and dangerous morass of contradictions that it is. Moreover, the commenter provides no evidence that any transhumanist wants to develop “humanity-obliterating technologies” or that the obliteration of humanity is even a remote risk from the technologies that transhumanists do advocate.

Misconception #8: Transhumanism is wrong because life would have no meaning without death.

Asserting that only death can give life meaning is another bizarre contradiction, and, moreover, a claim that life can have no intrinsic value or meaning qua life. It is sad indeed to think that some people do not see how they could enjoy life, pursue goals, and accumulate values in the absence of the imminent threat of their own oblivion. Clearly, this is a sign of a lack of creativity and appreciation for the wonderful fact that we are alive. I delve into this matter extensively in my “Eliminating Death” video series. Part 3 discusses how indefinite life extension leaves no room for boredom because the possibilities for action and entertainment increase in an accelerating manner. Parts 8 and 9 refute the premise that death gives motivation and a “sense of urgency” and make the opposite case – that indefinite longevity spurs people to action by making it possible to attain vast benefits over longer timeframes. Indefinite life extension would enable people to consider the longer-term consequences of their actions. On the other hand, in the status quo, death serves as the great de-motivator of meaningful human endeavors.

Misconception #9: Removing death is like removing volatility, which “fragilizes the system”.

This sentiment was an extrapolation by a commenter on Taleb’s ideas in Antifragile. It rests on fundamentally collectivistic premises – that the “volatility” of individual death can be justified if it somehow supports a “greater whole”. (Who is advocating the sacrifice of the individual to the collective now?) The fallacy here is to presuppose that the “greater whole” has value in and of itself, apart from the individuals comprising it. An individualist view of ethics and of society holds the opposite – that societies are formed for the mutual benefit of participating individuals, and the moment a society turns away from that purpose and begins to damage its participants instead of benefiting them, it ceases to be desirable. Furthermore, Taleb’s premise that suppression of volatility causes fragility is itself dubious in many instances. It may hold, up to a point, for an individual organism whose immune system and muscles use volatility to build adaptive responses to external stressors. However, such an adaptive response requires very specific structures that do not exist in all systems. In the case of human death, there is no way in which the destruction of a non-violent and fundamentally decent individual can provide external benefits of any kind worth having. How would the death of your grandparents fortify the mythic “society” against anything?

Misconception #10: Immortality is “a bit like staying awake 24/7”.

Presumably, those who make this comparison think that indefinite life would be too monotonous for their tastes. But, in fact, humans who live indefinitely can still choose to sleep (or take vacations) if they wish. Death, on the other hand, is irreversible. Once you die, you are dead 24/7 – and you are not even given the opportunity to change your mind. Besides, why would it be tedious or monotonous to live a life full of possibilities, where an individual can have complete discretion over his pursuits and can discover as much about existence as his unlimited lifespan allows? To claim that living indefinitely would be monotonous is to misunderstand life itself, with all of its variety and heterogeneity.

Misconception #11: Transhumanism is unacceptable because of the drain on natural resources that comes from living longer.

This argument presupposes that resources are finite and incapable of being augmented by human technology and creativity. In fact, one era’s waste is another era’s treasure (as has occurred with oil since the mid-19th century). As Julian Simon recognized, the ultimate resource is the human mind and its ability to discover new ways of harnessing natural laws for human benefit. We have more resources known and accessible to us now – both in terms of food and the inanimate bounties of the Earth – than ever before in recorded history. This has occurred in spite of – and perhaps because of – dramatic population growth, which has also brought many brilliant new minds into the human species. In Part 4 of my “Eliminating Death” video series, I explain that doomsday fears of overpopulation do not hold up, either historically or prospectively. Indeed, the progress of technology is precisely what helps us overcome strains on natural resources.

Conclusion

The opposition to transhumanism is generally limited to espousing variations of the common fallacies identified above (with perhaps a few others thrown in). To make real intellectual progress, it is necessary to move beyond these fallacies, which serve as mental roadblocks to further exploration of the subject – a justification for people to consider transhumanism too weird, too unrealistic, or too repugnant to take seriously. Detractors of transhumanism appear to recycle these same hackneyed remarks as a way to avoid seriously delving into the actual and genuinely interesting philosophical questions raised by emerging technological innovations – questions on which many transhumanists themselves hold sincere differences of understanding and opinion. Fundamentally, though, my aim here is not to “convert” the detractors – many of whom are beyond the reach of reason, for their opposition is not motivated by reason. Rather, it is to speak to laypeople who are not yet swayed one way or the other, but who might otherwise never encounter transhumanism except through the filter of those who distort and grossly misunderstand it. Even an elementary explication of what transhumanism actually stands for will reveal that we do, in fact, strongly advocate individual human life and flourishing, as well as technological progress that will uplift every person’s quality of life and range of opportunities. Those who disagree with any transhumanist about specific means for achieving these goals are welcome to engage in a conversation or debate about the merits of any given pathway. But an indispensable starting point for such interaction is the recognition that transhumanists are serious thinkers, friends of human life, and sincere advocates of improving the human condition.

Non-Apocalypse, Existential Risk, and Why Humanity Will Prevail – Video by G. Stolyarov II

Non-Apocalypse, Existential Risk, and Why Humanity Will Prevail – Video by G. Stolyarov II

Doomsday predictions are not only silly but also foster harmful ways of approaching life and the world. Mr. Stolyarov expresses his view that there will never be an end of the world, an end of humanity, or an end of civilization. While some genuine existential risks do exist, most of them are not man-made, and even the man-made risks are largely in the past.

References

– “Transhumanism and the 2nd Law of Thermodynamics” – Video by G. Stolyarov II