
Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018

The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.

U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.

Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentation by Gennady Stolyarov II at RAAD Fest 2018, entitled, “The U.S. Transhumanist Party: Four Years of Advocating for the Future”.

Advocating for the Future – Panel at RAAD Fest 2017 – Gennady Stolyarov II, Zoltan Istvan, Max More, Ben Goertzel, Natasha Vita-More

Gennady Stolyarov II, Chairman of the United States Transhumanist Party, moderated this panel discussion, entitled “Advocating for the Future”, at RAAD Fest 2017 on August 11, 2017, in San Diego, California.

Watch it on YouTube here.

From left to right, the panelists are Zoltan Istvan, Gennady Stolyarov II, Max More, Ben Goertzel, and Natasha Vita-More. With these leading transhumanist luminaries, Mr. Stolyarov discussed subjects such as what the transhumanist movement will look like in 2030, artificial intelligence and sources of existential risk, gamification and the use of games to motivate young people to create a better future, and how to persuade large numbers of people to support life-extension research with at least the same degree of enthusiasm that they display toward the fight against specific diseases.

Learn more about RAAD Fest here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentations of Gennady Stolyarov II and Zoltan Istvan from the “Advocating for the Future” panel.

Elon Musk and Merging With Machines – Article by Edward Hudgins

The New Renaissance Hat
Edward Hudgins
******************************

Elon Musk seems to be on board with the argument that, as a news headline sums up, “Humans must merge with machines or become irrelevant in AI age.” The PayPal co-founder and SpaceX and Tesla Motors innovator has, in the past, expressed concern about deep AI. He even had a cameo in Transcendence, a Johnny Depp film that was a cautionary tale about humans becoming machines.

Has Musk changed his views? What should we think?

Human-machine symbiosis

Speaking this week at the opening of Tesla in Dubai, Musk warned governments to “Make sure researchers don’t get carried away — scientists get so engrossed in their work they don’t realize what they are doing.” But he also said that “Over time I think we will probably see a closer merger of biological intelligence and digital intelligence.” In techno-speak he told listeners that “Some high-bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence.” Imagine calculating a rocket trajectory by just thinking about it, since your brain and the artificial intelligence with which it links are one!

This is, of course, the vision that is the goal of Ray Kurzweil and Peter Diamandis, co-founders of Singularity University. It is the Transhumanist vision of philosopher Max More. It is a vision of exponential technologies that could even help us live forever.

AI doubts?

But in the past, Musk has expressed doubts about AI. In July 2015, he signed onto “Autonomous Weapons: an Open Letter from AI & Robotics Researchers,” which warned that such devices could “select and engage targets without human intervention.” Yes, out-of-control killer robots! But it concluded that “We believe that AI has great potential to benefit humanity in many ways … Starting a military AI arms race is a bad idea…” The letter was also signed by Diamandis, one of the foremost AI proponents. So it’s fair to say that Musk was simply offering reasonable caution.

In Werner Herzog’s documentary Lo and Behold: Reveries of a Connected World, Musk explained that “I think that the biggest risk is not that the AI will develop a will of its own but rather that it will follow the will of people that establish its utility function.” He offered, “If you were a hedge fund or private equity fund and you said, ‘Well, all I want my AI to do is maximize the value of my portfolio,’ then the AI could decide … to short consumer stocks, go long defense stocks, and start a war.” We wonder if the AI would appreciate that in the long-run, cities in ruins from war would harm the portfolio? In any case, Musk again seems to offer reasonable caution rather than blanket denunciations.

But in his Dubai remarks, he still seemed reticent. Should he and we be worried?

Why move ahead with AI?

Exponential technologies already have revolutionized communications and information and are doing the same to our biology. In the short-term, human-AI interfaces, genetic engineering, and nanotech all promise to enhance our human capacities, to make us smarter, quicker of mind, healthier, and long-lived.

In the long-term Diamandis contends that “Enabled with [brain-computer interfaces] and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.”

What does this mean? If we are truly Transhuman, will we be soulless Star Trek Borgs rather than Datas seeking a better human soul? There has been much deep thinking about such questions, but I don’t know and neither does anyone else.

In Ayn Rand’s 1937 short novel Anthem, we see an impoverished dystopia governed by a totalitarian elite. We read that “It took fifty years to secure the approval of all the Councils for the Candle, and to decide on the number needed.”

Proactionary!

Many elites today are in the throes of the “precautionary principle.” It holds that if an action or policy has a suspected risk of causing harm … the burden of proof that it is not harmful falls on those proposing the action or policy. Under this “don’t do anything for the first time” illogic, humans would never have used fire, much less candles.

By contrast, Max More offers the “proactionary principle.” It holds that we should assess risks according to available science, not popular perception; account for both risks and the costs of opportunities foregone; and protect people’s freedom to experiment, innovate, and progress.

Diamandis, More, and, let’s hope, Musk are on the same path to a future we can’t predict but which we know can be beyond our most optimistic dreams. And you should be on that path too!

Explore:

Edward Hudgins, “Public Opposition to Biotech Endangers Your Life and Health“. July 28, 2016.

Edward Hudgins, “The Robots of Labor Day“. September 2, 2015.

Edward Hudgins, “Google, Entrepreneurs, and Living 500 Years“. March 12, 2015.

Dr. Edward Hudgins is the director of advocacy for The Atlas Society and the editor and author of several books on politics and government policy. He is also a member of the U.S. Transhumanist Party.

Will Banning Genetic Engineering Kill You? – Article by Edward Hudgins

The New Renaissance Hat
Edward Hudgins
******************************

One headline reads “British baby given genetically-edited immune cells to beat cancer in world first.” Another headline reads “Top biologists debate ban on gene-editing.” It’s a literal life-and-death debate.

And if you care to live, pay attention to this philosophical clash!

Exponential growth in genetic engineering

Genetic engineering is on an exponential growth path. In 2001 the cost of sequencing a human-sized genome was about $100 million. By 2007 the cost was down to $10 million.

Now it’s just over $1,000. Scientists and even do-it-yourself biohackers can now cheaply access DNA information that could allow them to discover cures for diseases and much more.
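As a rough sketch of what the cost figures above imply, the arithmetic below fits a constant yearly decline factor to each interval. Note the end year of roughly 2015 for the ~$1,000 figure is an assumption, since the article only says “now”; the point is simply that both intervals imply a decline far faster than Moore’s-law-style halving every two years.

```python
import math

def annual_decline_factor(cost_start, cost_end, years):
    """Constant per-year factor by which cost falls over the period."""
    return (cost_end / cost_start) ** (1.0 / years)

# 2001 -> 2007: $100 million down to $10 million (figures from the article)
f1 = annual_decline_factor(100e6, 10e6, 6)
# 2007 -> ~2015: $10 million down to ~$1,000 (the end year is an assumption)
f2 = annual_decline_factor(10e6, 1e3, 8)

for label, f in [("2001-2007", f1), ("2007-~2015", f2)]:
    halving_time = math.log(0.5) / math.log(f)  # years for the cost to halve
    print(f"{label}: cost multiplied by {f:.2f} per year; "
          f"halves roughly every {halving_time:.1f} years")
```

Under these assumptions the first interval implies the cost halved about every 1.8 years, and the second interval implies a halving well under once a year, which is what “exponential growth path” means concretely here.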

Recently, for example, baby Layla Richards was diagnosed with leukemia. But when none of the usual treatments worked, doctors created designer immune cells, injected them into the little girl, and the treatment worked. She was cured.

Designer babies?

But there have been concerns about such engineering for decades; indeed, precautionary guidelines were drawn up by a group of biologists at the 1975 Asilomar conference in California. And now, at a joint conference in Washington, D.C. of the National Academies of Medicine and Sciences, the Chinese Academy of Sciences and the Royal Society of the United Kingdom, a cutting-edge genetic engineering tool known as CRISPR-Cas9 came under attack because it can be used to edit the genomes of sperm, eggs, and embryos.

National Institutes of Health director Francis Collins argued that the children that would result from such editing “can’t give consent to having their genomes altered” and that “the individuals whose lives are potentially affected by germline manipulation could extend many generations into the future.” Hille Haker, a Catholic theologian from Loyola University Chicago, agreed and proposed a two-year ban on all research into such manipulation of genomes. Others argued that such manipulation could lead to “designer babies,” that is, parents using this technology to improve or enhance the intelligence and strength of their children.

These arguments are bizarre to say the least.

Damning to misery

To begin with, there is virtually universal agreement among religious and secular folk alike that from birth and until a stage of maturity at which they can potentially guide their lives by their own reason, the consent of children is not needed when their parents make many potentially life-altering decisions for them. Why should this reasonable rule be different for decisions made by parents before a child is born?

And consider that the principal decisions with gene-editing technology would be to eliminate the possibility of the child later in life having Alzheimer’s or Parkinson’s diseases, cancers, and a host of other ailments that plague humanity. Is it even conceivable that any rational individual would not thank their parents for ensuring their health and longevity? Isn’t this what all parents wish for their children? Why would anyone deny parents the tools to ensure healthy children? How much continued misery and death are those who would delay genetic research or ban this new technology inflicting on parents and children alike?

And so what if the “slippery slope” is parents ensuring that their children are more intelligent or stronger? Right now such traits are a matter of a genetic lottery and every parent hopes for the best. What parent wouldn’t jump at the chance to ensure such beneficial capacities for their children?

A privileged biological elite?

Some might pull out the ugly egalitarian argument that the “rich” could produce biologically elite “superchildren,” leaving the rest of humanity behind: an inferior, impoverished breed to be exploited. But this is the same spurious argument made about every technology that initially allows more prosperous individuals to better themselves ahead of others. We heard two decades ago that only the “rich” would be able to afford computers and the internet, allowing them to be more informed and, thus, enabling them to oppress the downtrodden masses. But exponential changes in technologies ensure that just as computers and the internet have become inexpensive and available to all, so will genetic enhancements become after the techniques are perfected for prosperous beta-testers.

And in any case, just as it is immoral to deprive those who honestly earn their wealth of the fruits of their labor just because others have yet to earn theirs, so it is immoral to deprive them of the opportunity to provide the best biology for their children just because it will take time for the technology to become available to all.

Precautionary principle or proactionary principle?

Many opponents of genetic engineering fall back on the so-called “precautionary principle.” This is the notion that if products or technologies pose any imaginable risks—often highly speculative or vague ones unsupported by any sound science—then such products or technologies should be severely restricted, regulated, or banned. The burden is placed on innovators to prove that no harm to humans will result from their innovations.

But had this standard been applied in the past, we would not have the modern world today. Indeed, by this standard, precaution would dictate that fire was just too dangerous for humans and that cavemen should have been barred from rubbing two sticks together.

Max More, a founder of the transhumanist philosophy, offers instead the “proactionary principle.” He argues that “People’s freedom to innovate technologically is highly valuable, even critical, to humanity.” And “Progress should not bow to fear, but should proceed with eyes wide open.” And that we need to “Protect the freedom to innovate and progress while thinking and planning intelligently for collateral effects.”

Freedom to progress

Fortunately, more individuals than More reason this way. At the D.C. conference, University of Manchester Professor John Harris argued: “We all have an inescapable moral duty: To continue with scientific investigation to the point at which we can make a rational choice. We are not yet at that point. It seems to me, consideration of a moratorium is the wrong course. Research is necessary.” But the opinion of academics one way or another might not matter. Just as it was do-it-yourselfers and innovators in garages who made the computer and information revolution, genetic innovations might well come from such achievers as well. But they won’t do it if they are not free to do so.

If you value your life and the lives and health of your children, you had better work for this freedom to innovate.

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright The Atlas Society. For more information, please visit www.atlassociety.org.

Gennady Stolyarov II Interviewed on Transhumanism by Rebecca Savastio of Guardian Liberty Voice

The New Renaissance Hat
G. Stolyarov II
May 26, 2014
******************************
Rebecca Savastio of Guardian Liberty Voice has published an excellent interview with me, which mentions Death is Wrong in its introduction and delves into various questions surrounding transhumanism and emerging technologies. In my responses, I also make reference to writings by Ray Kurzweil, Max More, Julian Simon, and Singularity Utopia. Additionally, I cite my 2010 essay, “How Can I Live Forever: What Does and Does Not Preserve the Self“.
***
I was pleased to be able to advocate in favor of transformative technological progress on multiple fronts.
***
Read Ms. Savastio’s article containing the interview: “Gennady Stolyarov on Transhumanism, Google Glass, Kurzweil, and Singularity“.
Common Misconceptions about Transhumanism – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
January 26, 2013
******************************

After the publication of my review of Nassim Taleb’s latest book Antifragile, numerous comments were made by Taleb’s followers – many of them derisive – on Taleb’s Facebook page. (You can see a screenshot of these comments here.) While I will only delve into a few of the specific comments in this article, I consider it important to distill the common misconceptions that motivate them. Transhumanism is often misunderstood and maligned by those who are ignorant of it – or those who were exposed solely to detractors such as John Gray, Leon Kass, and Taleb himself. This essay will serve to correct these misconceptions in a concise fashion. Those who still wish to criticize transhumanism should at least understand what they are criticizing and present arguments against the real ideas, rather than straw men constructed by the opponents of radical technological progress.

Misconception #1: Transhumanism is a religion.

Transhumanism does not posit the existence of any deity or other supernatural entity (though some transhumanists are religious independently of their transhumanism), nor does transhumanism hold a faith (belief without evidence) in any phenomenon, event, or outcome. Transhumanists certainly hope that technology will advance to radically improve human opportunities, abilities, and longevity – but this is a hope founded in the historical evidence of technological progress to date, and the logical extrapolation of such progress. Moreover, this is a contingent hope. Insofar as the future is unknowable, the exact trajectory of progress is difficult to predict, to say the least. Furthermore, the speed of progress depends on the skill, devotion, and liberty of the people involved in bringing it about. Some societal and political climates are more conducive to progress than others. Transhumanism does not rely on prophecy or mystical fiat. It merely posits a feasible and desirable future of radical technological progress and exhorts us to help achieve it. Some may claim that transhumanism is a religion that worships man – but that would distort the term “religion” so far from its original meaning as to render it vacuous and merely a pejorative used to label whatever system of thinking one dislikes. Besides, those who make that allegation would probably perceive a mere semantic quibble between seeking man’s advancement and worshipping him. But, irrespective of semantics, the facts do not support the view that transhumanism is a religion. After all, transhumanists do not spend their Sunday mornings singing songs and chanting praises to the Glory of Man.

Misconception #2: Transhumanism is a cult.

A cult, unlike a broader philosophy or religion, is characterized by extreme insularity and dependence on a closely controlling hierarchy of leaders. Transhumanism has neither element. Transhumanists are not urged to disassociate themselves from the wider world; indeed, they are frequently involved in advanced research, cutting-edge invention, and prominent activism. Furthermore, transhumanism does not have a hierarchy or leaders who demand obedience. Cosmopolitanism is a common trait among transhumanists. Respected thinkers, such as Ray Kurzweil, Max More, and Aubrey de Grey, are open to discussion and debate and have had interesting differences in their own views of the future. A still highly relevant conversation from 2002, “Max More and Ray Kurzweil on the Singularity“, highlights the sophisticated and tolerant way in which respected transhumanists compare and contrast their individual outlooks and attempt to make progress in their understanding. Any transhumanist is free to criticize any other transhumanist and to adopt some of another transhumanist’s ideas while rejecting others. Because transhumanism characterizes a loose network of thinkers and ideas, there is plenty of room for heterogeneity and intellectual evolution. As Max More put it in the “Principles of Extropy, v. 3.11”, “the world does not need another totalistic dogma.”  Transhumanism does not supplant all other aspects of an individual’s life and can coexist with numerous other interests, persuasions, personal relationships, and occupations.

Misconception #3: Transhumanists want to destroy humanity. Why else would they use terms such as “posthuman” and “postbiological”?

Transhumanists do not wish to destroy any human. In fact, we want to prolong the lives of as many people as possible, for as long as possible! The terms “transhuman” and “posthuman” refer to overcoming the historical limitations and failure modes of human beings – the precise vulnerabilities that have rendered life, in Thomas Hobbes’s words, “nasty, brutish, and short” for most of our species’ past. A species that transcends biology will continue to have biological elements. Indeed, my personal preference in such a future would be to retain all of my existing healthy biological capacities, but also to supplement them with other biological and non-biological enhancements that would greatly extend the length and quality of my life. No transhumanist wants human beings to die out and be replaced by intelligent machines, and every transhumanist wants today’s humans to survive to benefit from future technologies. Transhumanists who advocate the development of powerful artificial intelligence (AI) support either (i) integration of human beings with AI components or (ii) the harmonious coexistence of enhanced humans and autonomous AI entities. Even those transhumanists who advocate “mind backups” or “mind uploading” in an electronic medium (I am not one of them, as I explain here) do not wish for their biological existences to be intentionally destroyed. They conceive of mind uploads as contingency plans in case their biological bodies perish.

Even the “artilect war” anticipated by more pessimistic transhumanists such as Hugo de Garis is greatly misunderstood. Such a war, if it arises, would not come from advanced technology, but rather from reactionaries attempting to forcibly suppress technological advances and persecute their advocates. Most transhumanists do not consider this scenario to be likely in any event. More probable are lower-level protracted cultural disputes and clashes over particular technological developments.

Misconception #4: “A global theocracy envisioned by Moonies or the Taliban would be preferable to the kind of future these traitors to the human species have their hearts set on, because even the most joyless existence is preferable to oblivion.”

The above was an actual comment on the Taleb Facebook thread. It is astonishing that anyone would consider theocratic oppression preferable to radical life extension, universal abundance, ever-expanding knowledge of macroscopic and microscopic realms, exploration of the universe, and the liberation of individuals from historical chains of oppression and parasitism. This misconception is fueled by the strange notion that transhumanists (or technological progress in general) will destroy us all – as exemplified by the “Terminator” scenario of hostile AI or the “gray goo” scenario of nanotechnology run amok. Yet all of the apocalyptic scenarios involving future technology lack the safeguards that elementary common sense would introduce. Furthermore, they lack the recognition that incentives generated by market forces, as well as the sheer numerical and intellectual superiority of the careful scientists over the rogues, would always tip the scales greatly in favor of the defenses against existential risk. As I explain in “Technology as the Solution to Existential Risk” and “Non-Apocalypse, Existential Risk, and Why Humanity Will Prevail”,  the greatest existential risks have either always been with us (e.g., the risk of an asteroid impact with Earth) or are in humanity’s past (e.g., the risk of a nuclear holocaust annihilating civilization). Technology is the solution to such existential risks. Indeed, the greatest existential risk is fear of technology, which can retard or outright thwart the solutions to the perils that may, in the status quo, doom us as a species. As an example, Mark Waser has written an excellent commentary on the “inconvenient fact that not developing AI (in a timely fashion) to help mitigate other existential risks is itself likely to lead to a substantially increased existential risk”.

Misconception #5: Transhumanists want to turn people into the Borg from Star Trek.

The Borg are the epitome of a collectivistic society, where each individual is a cog in the giant species machine. Most transhumanists are ethical individualists, and even those who have communitarian leanings still greatly respect individual differences and promote individual flourishing and opportunity. Whatever their positions on the proper role of government in society might be, all transhumanists agree that individuals should not be destroyed or absorbed into a collective where they lose their personality and unique intellectual attributes. Even those transhumanists who wish for direct sharing of perceptions and information among individual minds do not advocate the elimination of individuality. Rather, their view might better be thought of as multiple puzzle pieces being joined but remaining capable of full separation and autonomous, unimpaired function.

My own attraction to transhumanism is precisely due to its possibilities for preserving individuals qua individuals and avoiding the loss of the precious internal universe of each person. As I expressed in Part 1 of my “Eliminating Death” video series, death is a horrendous waste of irreplaceable human talents, ideas, memories, skills, and direct experiences of the world. Just as transhumanists would recoil at the absorption of humankind into the Borg, so they rightly denounce the dissolution of individuality that presently occurs with the oblivion known as death.

Misconception #6: Transhumanists usually portray themselves “like robotic, anime-like characters”.

That depends on the transhumanist in question. Personally, I portray myself as me, wearing a suit and tie (which Taleb and his followers dislike just as much – but that is their loss). Furthermore, I see nothing robotic or anime-like about the public personas of Ray Kurzweil, Aubrey de Grey, or Max More, either.

Misconception #7: “Transhumanism is attracting devotees of a frighteningly high scientific caliber, morally retarded geniuses who just might be able to develop the humanity-obliterating technology they now merely fantasize about. It’s a lot like a Heaven’s Gate cult, but with prestigious degrees in physics and engineering, many millions more in financial backing, a growing foothold in mainstream culture, a long view of implementing their plan, and a death wish that extends to the whole human race not just themselves.”

This is another statement on the Taleb Facebook thread. Ironically, the commenter is asserting that the transhumanists, who support the indefinite lengthening of human life, have a “death wish” and are “morally retarded”, while he – who opposes the technological progress needed to preserve us from the abyss of oblivion – apparently considers himself a champion of morality and a supporter of life. If ever there was an inversion of characterizations, this is it. At least the commenter acknowledges the strong technical skills of many transhumanists – but calling them “morally retarded” presupposes a counter-morality of death that should rightly be overcome and challenged, lest it sentence each of us to death. The Orwellian mindset that “evil is good” and “death is life” should be called out for the destructive and dangerous morass of contradictions that it is. Moreover, the commenter provides no evidence that any transhumanist wants to develop “humanity-obliterating technologies” or that the obliteration of humanity is even a remote risk from the technologies that transhumanists do advocate.

Misconception #8: Transhumanism is wrong because life would have no meaning without death.

Asserting that only death can give life meaning is another bizarre contradiction, and, moreover, a claim that life can have no intrinsic value or meaning qua life. It is sad indeed to think that some people do not see how they could enjoy life, pursue goals, and accumulate values in the absence of the imminent threat of their own oblivion. Clearly, this is a sign of a lack of creativity and appreciation for the wonderful fact that we are alive. I delve into this matter extensively in my “Eliminating Death” video series. Part 3 discusses how indefinite life extension leaves no room for boredom because the possibilities for action and entertainment increase in an accelerating manner. Parts 8 and 9 refute the premise that death gives motivation and a “sense of urgency” and make the opposite case – that indefinite longevity spurs people to action by making it possible to attain vast benefits over longer timeframes. Indefinite life extension would enable people to consider the longer-term consequences of their actions. On the other hand, in the status quo, death serves as the great de-motivator of meaningful human endeavors.

Misconception #9: Removing death is like removing volatility, which “fragilizes the system”.

This sentiment was an extrapolation by a commenter on Taleb’s ideas in Antifragile. It is subject to fundamentally collectivistic premises – that the “volatility” of individual death can be justified if it somehow supports a “greater whole”. (Who is advocating the sacrifice of the individual to the collective now?)  The fallacy here is to presuppose that the “greater whole” has value in and of itself, apart from the individuals comprising it. An individualist view of ethics and of society holds the opposite – that societies are formed for the mutual benefit of participating individuals, and the moment a society turns away from that purpose and starts to damage its participants instead of benefiting them, it ceases to be desirable. Furthermore, Taleb’s premise that suppression of volatility is a cause of fragility is itself dubious in many instances. It may work to a point with an individual organism whose immune system and muscles use volatility to build adaptive responses to external threats. However, the possibility of such an adaptive response requires very specific structures that do not exist in all systems. In the case of human death, there is no way in which the destruction of a non-violent and fundamentally decent individual can provide external benefits of any kind worth having. How would the death of your grandparents fortify the mythic “society” against anything?

Misconception #10: Immortality is “a bit like staying awake 24/7”.

Presumably, those who make this comparison think that indefinite life would be too monotonous for their tastes. But, in fact, humans who live indefinitely can still choose to sleep (or take vacations) if they wish. Death, on the other hand, is irreversible. Once you die, you are dead 24/7 – and you are not even given the opportunity to change your mind. Besides, why would it be tedious or monotonous to live a life full of possibilities, where an individual can have complete discretion over his pursuits and can discover as much about existence as his unlimited lifespan allows? To claim that living indefinitely would be monotonous is to misunderstand life itself, with all of its variety and heterogeneity.

Misconception #11: Transhumanism is unacceptable because of the drain on natural resources that comes from living longer.

This argument presupposes that resources are finite and incapable of being augmented by human technology and creativity. In fact, one era’s waste is another era’s treasure (as occurred with oil since the mid-19th century). As Julian Simon recognized, the ultimate resource is the human mind and its ability to discover new ways to harness natural laws to human benefit. We have more resources known and accessible to us now – both in terms of food and the inanimate bounties of the Earth – than ever before in recorded history. This has occurred in spite of – and perhaps because of – dramatic population growth, which has also introduced many new brilliant minds into the human species. In Part 4 of my “Eliminating Death” video series, I explain that doomsday fears of overpopulation do not hold, either historically or prospectively. Indeed, the progress of technology is precisely what helps us overcome strains on natural resources.

Conclusion

The opposition to transhumanism is generally limited to espousing some variations of the common fallacies I identified above (with perhaps a few others thrown in). To make real intellectual progress, it is necessary to move beyond these fallacies, which serve as mental roadblocks to further exploration of the subject – a justification for people to consider transhumanism too weird, too unrealistic, or too repugnant to even take seriously. Detractors of transhumanism appear to recycle these same hackneyed remarks as a way to avoid seriously delving into the actual and genuinely interesting philosophical questions raised by emerging technological innovations. These are questions on which many transhumanists themselves hold sincere differences of understanding and opinion. Fundamentally, though, my aim here is not to “convert” the detractors – many of whose opposition is beyond the reach of reason, for it is not motivated by reason. Rather, it is to speak to laypeople who are not yet swayed one way or the other, but who might not have otherwise learned of transhumanism except through the filter of those who distort and grossly misunderstand it. Even an elementary explication of what transhumanism actually stands for will reveal that we do, in fact, strongly advocate individual human life and flourishing, as well as technological progress that will uplift every person’s quality of life and range of opportunities. Those who disagree with any transhumanist about specific means for achieving these goals are welcome to engage in a conversation or debate about the merits of any given pathway. But an indispensable starting point for such interaction involves accepting that transhumanists are serious thinkers, friends of human life, and sincere advocates of improving the human condition.