Tag Archives: Ray Kurzweil

Discussion on Life-Extension Advocacy – G. Stolyarov II Answers Audience Questions

Categories: Philosophy, Transhumanism

The New Renaissance Hat

G. Stolyarov II

******************************

Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, answers audience questions regarding life-extension advocacy and possibilities for broadening the reach of transhumanist and life-extensionist ideas.

While we were unable to get into contact with our intended guest, Chris Monteiro, we were nonetheless able to have a productive, wide-ranging discussion. It addressed many areas of emerging technologies, trends in societal attitudes towards them, and related issues of cosmopolitanism and ideology, as well as the need for a new comprehensive philosophical paradigm of transmodernism or hypermodernism that would build on the legacy of the 18th-century Age of Enlightenment.

Become a member of the U.S. Transhumanist Party for free. Apply here.

Are We Entering The Age of Exponential Growth? – Article by Marian L. Tupy

Categories: Science, Technology, Transhumanism

The New Renaissance Hat
Marian L. Tupy
******************************

In his 1999 book The Age of Spiritual Machines, the famed futurist Ray Kurzweil proposed “The Law of Accelerating Returns.” According to Kurzweil’s law, “the rate of change in a wide variety of evolutionary systems (including but not limited to the growth of technologies) tends to increase exponentially.” I mention Kurzweil’s observation because it certainly is beginning to feel as though we are entering an age of colossal and rapid change. Consider the following:

According to The Telegraph, “Genes which make people intelligent have been discovered [by researchers at the Imperial College London] and scientists believe they could be manipulated to boost brain power.” This could usher in an era of super-smart humans and accelerate the already fast process of scientific discovery.

Elon Musk’s SpaceX Falcon 9 rocket has successfully “blasted off from Cape Canaveral, delivered communications satellites to orbit before its main-stage booster returned to a landing pad.” Put differently, space flight has just become much cheaper, because main-stage boosters, which were previously expendable, are very expensive.

The CEO of Merck has announced a major breakthrough in the fight against lung cancer. Keytruda “is a new category of drugs that stimulates the body’s immune system.” “Using Keytruda,” Kenneth Frazier said, “will extend [the life of lung cancer sufferers] … by approximately 13 months on average. We know that it will reduce the risk of death by 30-40 percent for people who had failed on standard chemo-therapy.”

Also, there has been massive progress in the development of “edible electronics.” New technology developed by Bristol Robotics Laboratory “will allow the doctor to feel inside your body without making a single incision, effectively taking the tips of the doctor’s fingers and transplant them onto the exterior of the [edible] robotic pill. When the robot presses against the interior of the intestinal tract, the doctor will feel the sensation as if her own fingers were pressing the flesh.”
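Kurzweil’s “accelerating returns” claim is, at bottom, a claim about exponential arithmetic. The toy calculation below (my own illustration, not Kurzweil’s model; the five-period doubling time is an arbitrary assumption) shows why exponential change feels sudden: a capability that doubles at a fixed interval spends most of its early history looking flat, then dwarfs any linear trend.

```python
# Toy comparison of linear vs. exponential ("accelerating returns") growth.
# Assumes a hypothetical capability that doubles every 5 periods.

def linear(t, start=1.0, rate=1.0):
    """Capability at time t under constant additive progress."""
    return start + rate * t

def exponential(t, start=1.0, doubling_time=5):
    """Capability at time t if it doubles every `doubling_time` periods."""
    return start * 2 ** (t / doubling_time)

for t in (0, 10, 20, 30, 40):
    print(f"t={t:2d}  linear={linear(t):6.1f}  exponential={exponential(t):10.1f}")
```

At t = 40 the linear curve has reached 41 while the doubling curve has reached 256; at t = 60 it is 4,096 against 61, which is the arithmetic behind the feeling that change has suddenly become colossal.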

Marian L. Tupy is the editor of HumanProgress.org and a senior policy analyst at the Center for Global Liberty and Prosperity. He specializes in globalization and global wellbeing, and the political economy of Europe and sub-Saharan Africa. His articles have been published in the Financial Times, Washington Post, Los Angeles Times, Wall Street Journal, U.S. News and World Report, The Atlantic, Newsweek, The U.K. Spectator, Weekly Standard, Foreign Policy, Reason magazine, and various other outlets both in the United States and overseas. Tupy has appeared on The NewsHour with Jim Lehrer, CNN International, BBC World, CNBC, MSNBC, Al Jazeera, and other channels. He has worked on the Council on Foreign Relations’ Commission on Angola, testified before the U.S. Congress on the economic situation in Zimbabwe, and briefed the Central Intelligence Agency and the State Department on political developments in Central Europe. Tupy received his B.A. in international relations and classics from the University of the Witwatersrand in Johannesburg, South Africa, and his Ph.D. in international relations from the University of St. Andrews in Great Britain.

This work by Cato Institute is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

How Anti-Individualist Fallacies Prevent Us from Curing Death – Article by Edward Hudgins

Categories: Philosophy, Technology, Transhumanism

The New Renaissance Hat
Edward Hudgins
July 3, 2015
******************************

Are you excited about Silicon Valley entrepreneurs investing billions of dollars to extend life and even “cure” death?

It’s amazing that such technologically challenging goals have gone from sci-fi fantasies to fantastic possibilities. But the biggest obstacles to life extension could be cultural: the anti-individualist fallacies arrayed against this goal.

Entrepreneurs defy death

A recent Washington Post feature documents the “Tech titans’ latest project: Defy death.” Peter Thiel, PayPal co-founder and venture capitalist, has led the way, raising awareness and funding regenerative medicines. He explains: “I’ve always had this really strong sense that death was a terrible, terrible thing… Most people end up compartmentalizing and they are in some weird mode of denial and acceptance about death, but they both have the result of making you very passive. I prefer to fight it.”

Others prefer to fight as well. Google CEO Larry Page created Calico to invest in start-ups working to stop aging. Oracle’s Larry Ellison has also provided major money for anti-aging research. Google’s Sergey Brin and Facebook’s Mark Zuckerberg both have funded the Breakthrough Prize in Life Sciences Foundation.

Beyond the Post piece, we can applaud the education in the exponential technologies needed to reach these goals offered by Singularity University, co-founded by futurist Ray Kurzweil, who believes humans and machines will merge in the decades ahead to become transhumans, and by X-Prize founder Peter Diamandis.

The Post piece points out that while in the past two-thirds of science and medical research was funded by the federal government, today private parties put up two-thirds. These benefactors bring their entrepreneurial talents to their philanthropic efforts. They are restless for results and not satisfied with the slow pace of government bureaucracies plagued by red tape and politics.

“Wonderful!” you’re thinking. “Who could object?”

Laurie Zoloth’s inequality fallacy

Laurie Zoloth, for one. This Northwestern University bioethicist argues that “Making scientific progress faster doesn’t necessarily mean better — unless if you’re an aging philanthropist and want an answer in your lifetime.” The Post quotes her further as saying that “Science is about an arc of knowledge, and it can take a long time to play out.”

Understanding the world through science is a never-ending enterprise. But in this case, science is also about billionaires wanting answers in their lifetimes because they value their own lives foremost and they do not want them to end. And the problem is?

Zoloth grants that it is “wonderful to be part of a species that dreams in a big way” but she also wants “to be part of a species that takes care of the poor and the dying.” Wouldn’t delaying or even eliminating dying be even better?

The discoveries these billionaires facilitate will help millions of people in the long run. But her objection seems rooted in a morally distorted affinity for equality of condition: the feeling that it is wrong for some folks to have more than others—never mind that they earned it—in this case early access to life-extending technologies. She seems to feel that it is wrong for these billionaires to put their own lives, loves, dreams, and well-being first.

We’ve heard this “equality” nonsense for every technological advance: only elites will have electricity, telephones, radios, TVs, computers, the internet, smartphones, whatever. Yes, there are first adopters, those who can afford new things. Without them footing the bills early on, new technologies would never become widespread and affordable. This point should be blindingly obvious today, since the spread of new technologies in recent decades has accelerated. But in any case, the moral essential is that it is right for individuals to seek the best for themselves while respecting their neighbors’ liberty to do the same.

Leon Kass’s “long life is meaningless” fallacy

The Post piece attributes to political theorist Francis Fukuyama the belief that “a large increase in human life spans would take away people’s motivation for the adaptation necessary for survival. In that kind of world, social change comes to a standstill.”

Nonsense! As average lifespans doubled in past centuries, social change—mostly for the better—accelerated. Increased lifespans in the future could allow individuals to take on projects spanning centuries rather than decades. Indeed, all who love their lives regret that they won’t live to see, experience, and help create the wonders of tomorrow.

The Post cites physician and ethicist Leon Kass who asks: “Could life be serious or meaningful without the limit of mortality?”

Is Kass so limited in imagination or ignorant of our world that he doesn’t appreciate the great, long-term projects that could engage us as individuals seriously and meaningfully for centuries to come? (I personally would love to have the centuries needed to work on terraforming Mars, making it a new habitat for humanity!)

Fukuyama and Kass have missed the profound human truth that we each as individuals create the meaning for our own lives, whether we live 50 years or 500. Meaning and purpose are what only we can give ourselves as we pursue productive achievements that call upon the best within us.

Francis Fukuyama’s anti-individualist fallacy

The Post piece quotes Fukuyama as saying “I think that research into life extension is going to end up being a big social disaster… Extending the average human life span is a great example of something that is individually desirable by almost everyone but collectively not a good thing. For evolutionary reasons, there is a good reason why we die when we do.”

What a morally twisted reason for opposing life extension! Millions of individuals should literally damn themselves to death in the name of society. Then count me anti-social.

Some might take from Fukuyama’s premise a concern that millions of individuals living to 150 will spend half that time bedridden, vegetating, consuming resources, and not producing. But the life extension goal is to live long with our capacities intact—or enhanced! We want 140 to be the new 40!

What could be good evolutionary reasons why we die when we do? Evolution only metaphorically has “reasons.” It is a biological process that blindly adapted us to survive and reproduce: it didn’t render us immune to ailments. Because life is the ultimate value, curing those ailments rather than passively suffering them is the goal of medicine. Life extension simply takes the maintenance of human life a giant leap further.

Live long and prosper

Yes, there will be serious ethical questions to face as the research sponsored by benevolent billionaires bears fruit. But individuals who want to live really long and prosper in a world of fellow achievers need to promote human life as the ultimate value and the right of all individuals to live their own lives and pursue their own happiness as the ultimate liberty.

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright, The Atlas Society. For more information, please visit www.atlassociety.org.

“Ex Machina” Movie Review – Article by Edward Hudgins

Categories: Culture, Fiction, Transhumanism

The New Renaissance Hat
Edward Hudgins
July 3, 2015
******************************

How will we know if an artificial intelligence actually attains a human level of consciousness?

As work in robotics and merging man and machine accelerates, we can expect more movies on this theme. Some, like Transcendence, will be dystopian warnings of potential dangers. Others, like Ex Machina, elicit serious thought about what it is to be human. Combining a good story and good acting, Ex Machina should interest technophiles and humanists alike.

The Turing Test

The film opens on Caleb Smith (Domhnall Gleeson), a 27-year-old programmer at uber-search-engine company Blue Book, who wins a lottery to spend a week at the isolated mountain home of the company’s reclusive genius creator, Nathan Bateman (Oscar Isaac). But the hard-drinking, eccentric Nathan tells Caleb that they’re not just going to hang out and get drunk.

He has created an android AI named Ava (Alicia Vikander) with a mostly woman-like, but part robot-like, appearance. The woman part is quite attractive. Nathan wants Caleb to spend the week administering the Turing Test to determine whether the AI shows intelligent behavior indistinguishable from that of a human. Normally this test is administered so the tester cannot see whether he’s dealing with a human or a machine. The test consists of exchanges of questions and answers, and is usually done in some written form. Since Caleb already knows Ava is an AI, he really needs to be convinced in his daily sessions with her, reviewed each evening with Nathan, that Nathan has created, in essence, a sentient, self-conscious human. It’s a high bar.

Android sexual attraction

Ava is kept locked in a room where her behavior can be monitored 24/7. Caleb talks to her through a glass, and at first he asks standard questions any good techie would ask to determine if she is human or machine. But soon Ava is showing a clear attraction to Caleb. The feeling is mutual.

In another session, Ava turns the tables. She wants to know about Caleb and be his friend. But during one of the temporary power outages that seem to plague Nathan’s house, when the monitoring devices are off, Ava tells Caleb that Nathan is not his friend and not to trust him. When the power comes back on, Ava reverts to chatting about getting to know Caleb.

In another session, when Ava reveals she’s never allowed out of the room, Caleb asks where she would choose to go if she could leave. She says to a busy traffic intersection. To people watch! Curiosity about humanity!

Ava then asks Caleb to close his eyes and she puts on a dress and wig to cover her robot parts. She looks fully human. She says she’d wear this if they went on a date. Nathan later explains that he gave Ava gender since no human is without one. That is part of human consciousness. Nathan also explains that he did not program her specifically to like Caleb. And he explains that she is fully sexually functional.

A human form of awareness

In another session Caleb tells Ava what she certainly suspects, that he is testing her. To communicate what he’s looking for, he offers the “Mary in a Black and White Room” thought experiment. Mary has always lived in a room with no colors. All views of the outside world are through black and white monitors. But she understands everything about the physics of color and about how the human eyes and brain process color. But does she really “know” or “understand” color—the “qualia”—until she walks outside and actually sees the blue sky?

Is Ava’s imitation of human-level consciousness or awareness, like Mary’s awareness of color while still in the black and white room, purely theoretical? Is Ava simply a machine, a non-conscious automaton running a program by which she mimics human emotions and traits?

Ava is concerned with what will happen if she does not pass the Turing test. Nathan later tells Caleb that he thinks the AI after Ava will be the one he’s aiming for. And what will happen to Ava? The program will be downloaded and the memories erased. Caleb understands that this means Ava’s death.

Who’s testing whom?

During a blackout, this one with Nathan in a drunken stupor, Caleb borrows Nathan’s passcard to access closed rooms, and he discovers some disturbing truths about what preceded Ava and led to her creation.

In the next session, during a power outage, Ava and Caleb plan an escape from the facility. They plan to get Nathan drunk, change the lock codes on the doors, and get out at the next power outage.

But has Nathan caught on? On the day Caleb is scheduled to leave he tells Nathan that Ava has passed the Turing Test. But Nathan asks whether Caleb thinks Ava is just pretending to like Caleb in order to escape. If so, this would show human intelligence and would mean that Ava indeed has passed the test.

But who is testing and manipulating whom and to what end? The story takes a dramatic, shocking turn as the audience finds out who sees through whose lies and deceptions. Does Mary ever escape from the black and white room? Is Ava really conscious like a human?

What it means to be human

In this fascinating film, writer/director Alex Garland explores what it is to be human in terms of basic drives and desires. There is the desire to know, understand, and experience. There is the desire to love and be loved. There is the desire to be free to choose. And there is the love of life.

But to be human is also to be aware that others might block one from pursuing human goals, that others can be cruel, and that they can lie and deceive. There is also the recognition that one might need to use the same behavior oneself.

If thinkers like Singularity theorist Ray Kurzweil are right, AIs might be passing the Turing Test within a few decades. But even if they don’t, humans will more and more rely on technologies that could enhance our minds and capacities and extend our lives. As we do so, it will be even more important that we keep in mind what it is to be human and what is best about being human. Ex Machina will not only provide you with an entertaining evening at the movies; it will also help you use that very human capacity, the imagination, to prepare your mind to meet these challenges.

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright, The Atlas Society. For more information, please visit www.atlassociety.org.

Google, Entrepreneurs, and Living 500 Years – Article by Edward Hudgins

Categories: Business, Philosophy, Science, Transhumanism

The New Renaissance Hat
Edward Hudgins
March 29, 2015
******************************

“Is it possible to live to be 500?”

“Yes,” answers Bill Maris of Google, without qualifications.

A Bloomberg Markets piece on “Google Ventures and the Search for Immortality” documents how the billions of dollars Maris invests each year are transforming life itself. But the piece also makes clear that the most valuable asset he possesses—and that, in others, makes those billions work—is entrepreneurship.

Google’s Bio-Frontiers

Maris, who heads a venture capital fund set up by Google, studied neuroscience in college. So perhaps it is no surprise that he has invested over one-third of the fund’s billions in health and life sciences. Maris has been influenced by futurist and serial inventor Ray Kurzweil who predicts that by 2045 humans and machines will merge, radically transforming and extending human life, perhaps indefinitely. Google has hired Kurzweil to carry on his work towards what he calls this “singularity.”

Maris was instrumental in creating Calico, a Google company that seeks nothing less than to cure aging, that is, to defeat death itself.  This and other companies in which Maris directs funds have specific projects to bring about this goal, from genetic research to analyzing cancer data.

Maris observes that “There are a lot of billionaires in Silicon Valley, but in the end, we are all heading for the same place. If given the choice between making a lot of money or finding a way to live longer, what do you choose?”

Google Ventures does not restrict its investments to life sciences. For example, it helped with the Uber car service and has put money into data management and home automation tech companies.

“Entrepreneuring” tomorrow

Perhaps the most important take-away from the Bloomberg article is the “why” behind Maris’s efforts. The piece states that “A company with $66 billion in annual revenue isn’t doing this for the money. What Google needs is entrepreneurs.” And that is what Maris and Google Ventures are looking for.

They seek innovators with new, transformative and, ultimately, profitable ideas and visions. Most important, they seek those who have the strategies and the individual qualities that will allow them to build their companies and make real their visions.

Entrepreneurial life

But entrepreneurship is not just a formula for successful start-ups. It is a concept that is crucial for the kind of future that Google and Maris want to bring about, beyond the crucial projects of any given entrepreneur.

Entrepreneurs love their work. They aim at productive achievement. They are individualists who act on the judgments of their own minds. And they take full responsibility for all aspects of their enterprises.

On this model, all individuals should treat their own lives as their own entrepreneurial opportunities. They should love their lives. They should aim at happiness and flourishing—their big profit!—through productive achievement. They should act on the judgments of their own minds. And they should take full responsibility for every aspect of their lives.

And this entrepreneurial morality must define the culture of America and the world if the future is to be the bright one at which Google and Maris aim. An enterprise worthy of a Google investment would seek to promote this morality throughout the culture. It would seek strategies to replace cynicism and a sense of personal impotence and social decline with optimism and a recognition of personal efficacy and the possibility of social progress.

So let’s be inspired by Google’s efforts to change the world, and let’s help promote the entrepreneurial morality that is necessary for bringing it about.

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright, The Atlas Society. For more information, please visit www.atlassociety.org.

Gennady Stolyarov II Interviewed on Transhumanism by Rebecca Savastio of Guardian Liberty Voice

Categories: Technology, Transhumanism

The New Renaissance Hat
G. Stolyarov II
May 26, 2014
******************************
Rebecca Savastio of Guardian Liberty Voice has published an excellent interview with me, which mentions Death is Wrong in its introduction and delves into various questions surrounding transhumanism and emerging technologies. In my responses, I also make reference to writings by Ray Kurzweil, Max More, Julian Simon, and Singularity Utopia. Additionally, I cite my 2010 essay, “How Can I Live Forever: What Does and Does Not Preserve the Self”.
***
I was pleased to be able to advocate in favor of transformative technological progress on multiple fronts.
***
Read Ms. Savastio’s article containing the interview: “Gennady Stolyarov on Transhumanism, Google Glass, Kurzweil, and Singularity”.

The Slowly Spreading Realization That Aging Can Be Defeated – Article by Reason

Categories: Science, Self-Improvement, Transhumanism

The New Renaissance Hat
Reason
May 21, 2014
******************************

At some point in the next ten to twenty years the public at large, consisting of people who pay little attention to the ins and outs of progress in medicine, will start to wake up to realize that much longer healthy lives have become a possibility for the near future. The preliminaries to this grand awakening have been underway for a while, gradually, and will continue that way for a while longer. A few people every day in ordinary walks of life notice that, hey, a lot of scientists are talking about greatly extending human life spans these days, and, oh look, large sums of money are floating around to back this aim. There will be a slow dawning of realization, one floating light bulb at a time, as the concept of radical life extension is shifted in another brain from the “science fiction” bucket to the “science fact” bucket.

Some folks will then go back to what they were doing. Others will catch the fever and become advocates. A tiny few will donate funds in support of research or pressure politicians to do the same. Since we live in an age of pervasive communication, we see this process as it occurs. Many people are all too happy to share their realizations on a regular basis, and in this brave new world everyone can be a publisher in their own right.

Here is an example that I stumbled over today; a fellow with a day-to-day focus in a completely unrelated industry took notice and thought enough of what is going on in aging research to talk about it. He is still skeptical, but not to the point of dismissing the current state and prospects for longevity science out of hand: he can see that this is actionable, important knowledge.

What if de Grey and Kurzweil are half right?

Quote:

I think these guys – and the whole movement to conquer aging – is fascinating. I am highly skeptical of the claims, however. Optimism is all well and good, and I have no off-hand holes to poke in their (very) well-articulated arguments. But at the same time, biology is fiendishly complex, the expectations beyond fantastical.

Still though, I have to wonder: What if guys like de Grey and Kurzweil are half right, or even just partially right? What if, 30 years from now, it becomes physically impossible to tell a 30-year-old from a 70-year-old by physical appearance alone? It sounds nutty. But it’s a lot less nuttier, and a lot closer to the realm of possibility, than living to 1,000 – which, again, some very smart people have taken into their heads as an achievable thing.

People who don’t take care of themselves are insane. Ok, not actually “insane.” But seriously, given the potential rewards AND the risks, not taking care of your body and mind – not treating both with the utmost respect and care – seems absolutely nuts. At the poker table I see these young kids whose bodies are already turning to mush, and a part of me just wants to grab them by the shirt collar and say “Dudes! What the hell is WRONG with you!!!”

If it is possible – just realistically possible, mind you – that I could still be kicking ass and taking names at 125 years old, then I want to be working as hard as I can to preserve and maintain my equipment here and now. No matter what miracles medical science will achieve in future, working from the strongest, healthiest base possible will always improve the potential results, perhaps by an order of magnitude. Individuals who go into old age with fit, healthy bodies and sound minds, and longstanding habits to maintain both, may find potential for extended performance at truly high quality of life that was never before imaginable.

As the foundations of rejuvenation biotechnology are assembled and institutions like the SENS Research Foundation continue to win allies in the research community and beyond, the number of people experiencing this sort of epiphany will grow. The more the better and the sooner the better, as widespread support for the cause of defeating aging through medical science is necessary for more rapid progress: large scale funding always arrives late to the game, attracted by popular sentiment. The faster we get to that point, the greater our chances of living to benefit from the first working rejuvenation treatments.

Reason is the founder of The Longevity Meme (now Fight Aging!). He saw the need for The Longevity Meme in late 2000, after spending a number of years searching for the most useful contribution he could make to the future of healthy life extension. When not advancing the Longevity Meme or Fight Aging!, Reason works as a technologist in a variety of industries. 
***

This work is reproduced here in accord with a Creative Commons Attribution license. It was originally published on FightAging.org.

Guide to Talking about Immortality – Article by Wendy Hou

Categories: Education, Transhumanism

The New Renaissance Hat
Wendy Hou
April 1, 2014
******************************

Introduction

Wobster’s List of Words to Avoid

A Non-Threatening Script (Faith-Friendly!)

FAQs

Introduction

Death is natural. Death gives life meaning. Nothing would be meaningful if you lived forever. You’ll be bored of living. Immortality comes through what we leave behind. You live on in your children. Immortality would only be available to the wealthy. You’ll cause class warfare. Earth would run out of resources. People would stop having children. You should overcome your fear of death so you can live more fully.

A discussion about potential immortality is among the most frustrating conversations a rationalist will ever have. Nowhere else is the response so uniform, uniformly hostile, and boringly predictable. While a more intelligent or more educated person generally makes for a better discussion, that doesn’t seem to make any difference here.

Meet Generic Gerry. This is an ordinary person with an ordinary upbringing, uploaded with our society’s typical views on death. Here are my tips for talking to Generic Gerry. I hope it will be useful to you, so perhaps you can skip that pointless swirl and have a more fruitful discussion.

Wobster’s List of Words to Avoid

To begin with, here are some words you shouldn’t say.

  1. Immortal / immortality / live forever

This is number 1 for a reason! When you say “immortal”, you’re thinking of reading books and making art and enjoying the company of loved ones. You know what Gerry is thinking? Voldemort. Or perhaps the wicked stepmother in Tangled. Or perhaps the Flying Dutchman. Literature has not been kind. Let’s just skip the part where Gerry calls you selfish and accuses you of sacrificing others for yourself.

  2. Transhumanism

“Oh, like Ray Kurzweil!” Generic Gerry knows exactly one transhumanist, Ray Kurzweil. And (while Mr. Kurzweil is an excellent and inspiring person) Gerry thinks he’s crazy. Unfortunately, Gerry hasn’t actually met Mr. Kurzweil, only heard stories. Secondhand. They’ve become distorted along the way. “He takes 1000 vitamins and wants to bring back his father’s voice in a box!”

  3. Cryonics

Another topic that’s treated unfairly in the media. At best, Gerry thinks cryonics is weird; at worst, a cowardly scam. We don’t need those negative feelings here.

  4. Singularity / AI

Not directly relevant here, and kind of scary to Generic Gerry, who’s not super excited about computers taking over the world.

These are all buzzwords. They are like light switches in a room or buttons in a psyche. The moment you say “immortality”, you are no longer talking to an agent. You are now talking to an NPC. NPCs are all about programming. Their thinking switches off while their programming switches on, and out of their mouths comes a whole culture’s worth of social platitudes, all in one big defensive stream.

That’s why it’s always the same conversation.

A Non-Threatening Script (Faith-Friendly!)

Since talking about “not dying” makes Generic Gerry raise the defensive shields, I like to talk about “not dying without consent.”

  1. Begin with something anyone can agree with.

“Doesn’t it suck when people die of cancer at the age of 40 with two young kids? Or when they die slowly of Alzheimers?”

  2. Link to aging.

“If we could fix these aging-related problems, people wouldn’t get cancer when they get older anymore. They would stay healthy and active.”

  3. Introduce the vision.

“Instead of dying from cancer before they are ready, they can live out all their dreams and read all the books they want.”

  4. Stick close to the cultural norm.

“Then, when they decide they are ready, they can set up their affairs, get their finances in order, and die surrounded by family and friends.”

Of course, there will always be new books to read, and maybe you’d never decide you are ready to die, but you don’t have to say it. Leave Gerry to come to that conclusion.

It works even with the religious who want to be with their god or their eternal family someday. Most would object to never dying, but some do appreciate more control over when and how.

It’s important to remember you won’t change Gerry’s mind overnight. Gerry will have to think about it over weeks and months, maybe even years. Your goal is to crack the gates open. If Gerry rejects immortality, that gate is slammed shut. But if Gerry expresses interest in choosing the timing and circumstances of death, you’ve got your foot in the door! Gerry will not be openly hostile to discussing aging research with you. Perhaps Gerry will even be interested in the research or excited about advances. And for a first conversation, that’s the best you can hope for.

FAQs

I’ve heard every one of these way too many times. In all likelihood, so have you.

  • I want to go to heaven.

It will always be trivially easy to die. You’ll just get to choose when you’re ready. You won’t have to die unexpectedly at the age of 60 wishing you could watch your grandchild grow up.

  • If you’re afraid to die, you’re not really living.

Unfortunately, you are thinking of Voldemort, a character so afraid to die he never truly lived. Voldemort is also fiction. In real life, I’m more like a person who eats healthy to avoid heart disease.

  • Won’t living forever get boring?

Not in the first 1000 years, no. After that, you can choose to die if it’s boring.

  • When people are old, they are ready to die.

Seeing as 22% of all healthcare costs are incurred in the last year of life, no, they aren’t. But even if they were. . . .

When people are old, they are also tired, achy, and frail. Would they still be ready if they were healthy, fit, and active? Perhaps the real age when they’d be ready is 200 or 1000. We don’t know.

  • Would it be available to everyone or just the wealthy?

Short answer: It will be available to everyone.

Long answer: Even today, vaccines aren’t readily available in Africa. But we don’t grab our pitchforks, yelling “Down with vaccines!” In the US, cancer treatments are still limited to those who can afford them. Chemotherapy started with Eva Perón before reaching the rest of Argentina. Life extension will begin with the wealthy, too. One day, it will reach everyone. Those who care can help fund life extension for the poor, or better yet, donate to research to make life-extension techniques cheaper and better.

  • How will Earth support all those people?

That’s something we’ll have to figure out. Perhaps we could mine asteroids for resources or grow food on space stations. We might need to have fewer children until we can support them. What we don’t do is let the elderly die for resources, not even now.

  • Death is but the next great adventure.

That’s your belief, and you can choose it for yourself, but please don’t choose that path for me.

Wendy Hou is a programmer, mathematics instructor, and life-extension supporter.


Transhumanism and Mind Uploading Are Not the Same – Video by G. Stolyarov II


Categories: Philosophy, Technology, Transhumanism

In what is perhaps the most absurd attack on transhumanism to date, Mike Adams of NaturalNews.com equates this broad philosophy and movement with “the entire idea that you can ‘upload your mind to a computer'” and further posits that the only kind of possible mind uploading is the destructive kind, where the original, biological organism ceases to exist. Mr. Stolyarov refutes Adams’s equation of transhumanism with destructive mind uploading and explains that advocacy of mind uploading is neither a necessary nor a sufficient component of transhumanism.

References
– “Transhumanism and Mind Uploading Are Not the Same” – Essay by G. Stolyarov II
– “Transhumanism debunked: Why drinking the Kurzweil Kool-Aid will only make you dead, not immortal” – Mike Adams – NaturalNews.com – June 25, 2013
– SENS Research Foundation
– “Nanomedicine” – Wikipedia
– “Transhumanism: Towards a Futurist Philosophy” – Essay by Max More
– 2045 Initiative Website
– Bebionic Website
– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Essay by G. Stolyarov II
– “Immortality: Bio or Techno?” – Essay by Franco Cortese


We Seek Not to Become Machines, But to Keep Up with Them – Article by Franco Cortese


Categories: Philosophy, Technology, Transhumanism

The New Renaissance Hat
Franco Cortese
July 14, 2013
******************************
This article attempts to clarify four areas within the movement of Substrate-Independent Minds and the discipline of Whole-Brain Emulation that are particularly ripe for ready-at-hand misnomers and misconceptions.
***

Substrate-Independence 101:

  • Substrate-Independence:
    Substrate-independence applies to mind in general, not to any specific mind in particular.
  • The Term “Uploading” Misconstrues More than it Clarifies:
    Once WBE is experimentally-verified, we won’t be using conventional or general-purpose computers like our desktop PCs to emulate real, specific persons.
  • The Computability of the Mind:
    This concept has nothing to do with the brain operating like a computer. The liver is just as computable as the brain; their difference is one of computational intensity, not category.
  • We Don’t Want to Become The Machines – We Want to Keep Up With Them!:
    SIM & WBE are sciences of life-extension first and foremost. It is not out of sheer technophilia, contempt of the flesh, or wanton want of machinedom that proponents of Uploading support it. It is, for many, because we fear that Recursively Self-Modifying AI will implement an intelligence explosion before Humanity has a chance to come along for the ride. The creation of any one entity superintelligent relative to the rest constitutes both an existential risk and an antithetical affront to Man, whose sole, central, and incessant essence is to make himself to an increasingly greater degree, and not to have some artificial god do it for him or tell him how to do it.
Substrate-Independence
***

The term “substrate-independence” denotes the philosophical thesis of functionalism – that what matters about the mind and its constitutive sub-systems and processes is their function. If such a function could be recreated using an alternate series of component parts or procedural steps, or recreated on another substrate entirely, the thesis of functionalism holds that the result would be the same as the original, experientially speaking.

However, one rather common and ready-at-hand misinterpretation stemming from the term “Substrate-Independence” is the notion that we as personal selves could arbitrarily jump from mental substrate to mental substrate, since mind is software and software can be run on various general-purpose machines. The most common form of this notion is exemplified by scenarios laid out in various Greg Egan novels and stories, wherein a given person sends their mind encoded as a wireless signal to some distant receiver, to be reinstantiated upon arrival.

The term “substrate-independent minds” should denote substrate independence for the minds in general, again, the philosophical thesis of functionalism, and not this second, illegitimate notion. In order to send oneself as such a signal, one would have to put all the processes constituting the mind “on pause” – that is, all causal interaction and thus causal continuity between the software components and processes instantiating our selves would be halted while the software was encoded as a signal, transmitted and subsequently decoded. We could expect this to be equivalent to temporary brain death or to destructive uploading without any sort of gradual replacement, integration, or transfer procedure. Each of these scenarios incurs the ceasing of all causal interaction and causal continuity among the constitutive components and processes instantiating the mind. Yes, we would be instantiated upon reaching our destination, but we can expect this to be as phenomenally discontinuous as brain death or destructive uploading.

There is much talk in philosophical and futurist circles – where Substrate-Independent Minds are a familiar topic and a common point of discussion – about how the mind is software. This sentiment ultimately derives from functionalism and the notion that when it comes to mind, it is not the material of the brain that matters but the process(es) emerging therefrom. And because almost all software is designed to be implemented on general-purpose (i.e., standardized) hardware, the assumption follows that we should likewise be able to transfer the software of the mind to a new physical computational substrate with as much ease as we transfer ordinary software. While we would emerge from such a transfer functionally isomorphic with ourselves prior to the jump from computer to computer, we can expect this to be the phenomenal equivalent of brain death or destructive uploading – again, because all causal interaction and continuity between that software’s constitutive sub-processes has been discontinued. We would have been put on pause in the time between leaving one computer, whether as a static signal or in static solid-state storage, and arriving at the other.

This is not to say that we couldn’t transfer the physical substrate implementing the “software” of our mind to another body, provided the other body were equipped to receive such a physical substrate. But this doesn’t have quite the same advantage as beaming oneself to the other side of Earth, or Andromeda for that matter, at the speed of light.

But to transfer a given WBE to another mental substrate without incurring phenomenal discontinuity may very well involve a second gradual integration procedure, in addition to the one the WBE initially underwent (assuming it isn’t a product of destructive uploading). And indeed, this would be more properly thought of in the context of a new substrate being gradually integrated with the WBE’s existing substrate, rather than the other way around (i.e., portions of the WBE’s substrate being gradually integrated with an external substrate.) It is likely to be much easier to simply transfer a given physical/mental substrate to another body, or to bypass this need altogether by actuating bodies via tele-operation instead.

In summary, what is sought is substrate-independence for mind in general, and not for a specific mind in particular (at least not without a gradual integration procedure, like the type underlying the notion of gradual uploading, so as to transfer such a mind to a new substrate without causing phenomenal discontinuity).

The Term “Uploading” Misconstrues More Than It Clarifies

The term “Mind Uploading” has some drawbacks and creates common initial misconceptions. It is based on terminology originating in the context of conventional, contemporary computers – which may create the initial impression that we are talking about uploading a given mind into a desktop PC, to be run the way Microsoft Word is run. This makes the notion of WBE seem more fantastic and incredible – and thus improbable – than it actually is. I don’t think anyone seriously speculating about WBE would entertain such a notion.

Another potential misinterpretation particularly likely to result from the term “Mind Uploading” is that we seek to upload a mind into a computer – as though it were nothing more than a simple file transfer. This, again, connotes modern paradigms of computation and communications technology that are unlikely to be used for WBE. It also creates the connotation of putting the mind into a computer – whereas a more accurate connotation, at least as far as gradual uploading as opposed to destructive uploading is concerned, would be bringing the computer gradually into the biological mind.

It is easy to see why the term initially came into use. The notion of destructive uploading was the first embodiment of the concept. The notion of gradual uploading so as to mitigate the philosophical problems pertaining to how much a copy can be considered the same person as the original, especially in contexts where they are both simultaneously existent, came afterward. In the context of destructive uploading, it makes more connotative sense to think of concepts like uploading and file transfer.

But in the notion of gradual uploading, portions of the biological brain – most commonly single neurons, as in Robert A. Freitas’s and Ray Kurzweil’s versions of gradual uploading – are replaced with in-vivo computational substrate, to be placed where the neuron it is replacing was located. Such a computational substrate would be operatively connected to electrical or electrochemical sensors (to translate the biochemical or, more generally, biophysical output of adjacent neurons into computational input that can be used by the computational emulation) and electrical or electrochemical actuators (to likewise translate computational output of the emulation into biophysical input that can be used by adjacent biological neurons). It is possible to have this computational emulation reside in a physical substrate existing outside of the biological brain, connected to in-vivo biophysical sensors and actuators via wireless communication (i.e., communicating via electromagnetic signal), but this simply introduces a potential lag-time that may then have to be overcome by faster sensors, faster actuators, or a faster emulation. It is likely that the lag-time would be negligible, especially if it was located in a convenient module external to the body but “on it” at all times, to minimize transmission delays increasing as one gets farther away from such an external computational device. This would also likely necessitate additional computation to model the necessary changes to transmission speed in response to how far away the person is.  Otherwise, signals that are meant to arrive at a given time could arrive too soon or too late, thereby disrupting functionality. However, placing the computational substrate in vivo obviates these potential logistical obstacles.
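The sensor/actuator hardware described above is speculative, but the logical shape of gradual replacement can be sketched with a deliberately trivial toy model. Everything below (the network, its “nodes”, the tolerance) is invented for illustration and drawn from no actual WBE proposal: “biological” nodes are swapped one at a time for functionally identical “emulated” copies, the rest of the network keeps interacting with each replacement as it goes in, and input-output behavior is verified after every single swap.

```python
import random

def make_network(n, seed=0):
    """Toy 'brain': n nodes, each with a weight; output is a weighted sum of inputs."""
    rng = random.Random(seed)
    return [{"weight": rng.uniform(-1, 1), "substrate": "biological"} for _ in range(n)]

def network_output(net, inputs):
    # Function depends only on the weights, never on the substrate tag --
    # the functionalist premise: same function, same behavior.
    return sum(node["weight"] * x for node, x in zip(net, inputs))

def gradually_replace(net, inputs, tol=1e-9):
    """Swap nodes one at a time for functionally identical 'emulated' copies,
    checking functional equivalence after every single swap."""
    baseline = network_output(net, inputs)
    for node in net:
        node["substrate"] = "emulated"  # in-place: the rest of the net keeps interacting with it
        assert abs(network_output(net, inputs) - baseline) < tol, "functional divergence"
    return net

net = gradually_replace(make_network(5), [0.5, -1.0, 2.0, 0.1, 0.7])
assert all(node["substrate"] == "emulated" for node in net)
```

The point of the sketch is only the ordering constraint: no node is replaced until its replacement is already causally interacting with the rest of the network, which is exactly what the destructive scan-and-copy scenario skips.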

This notion is I think not brought into the discussion enough. It is an intuitively obvious notion if you’ve thought a great deal about Substrate-Independent Minds and frequented discussions on Mind Uploading. But to a newcomer who has heard the term Gradual Uploading for the first time, it is all too easy to think “yes, but then one emulated neuron would exist on a computer, and the original biological neuron would still be in the brain. So once you’ve gradually emulated all these neurons, you have an emulation on a computer, and the original biological brain, still as separate physical entities. Then you have an original and the copy – so where does the gradual in Gradual Uploading come in? How is this any different than destructive uploading? At the end of the day you still have a copy and an original as separate entities.”

This seeming impasse is I think enough to make the notion of Gradual Uploading seem at least intuitively or initially incredible and infeasible before people take the time to read the literature and discover how gradual uploading could actually be achieved (i.e., wherein each emulated neuron is connected to biophysical sensors and actuators to facilitate operational connection and causal interaction with existing in-vivo biological neurons) without fatally tripping upon such seeming logistical impasses, as in the example above. The connotations created by the term I think to some extent make it seem so fantastic (as in the overly simplified misinterpretations considered above) that people write off the possibility before delving deep enough into the literature and discussion to actually ascertain the possibility with any rigor.

The Computability of the Mind

Another common misconception is that the feasibility of Mind Uploading is based upon the notion that the brain is a computer or operates like a computer. The worst version of this misinterpretation that I’ve come across is the claim that proponents and supporters of Mind Uploading believe the mind is similar in operation to current and conventional paradigms of computing.

Before I elaborate on why this is wrong, I’d like to point out a particularly harmful sentiment that can result from this notion. It makes the concept of Mind Uploading seem dehumanizing, because conventional computers don’t display anything like intelligence or emotion. This makes people conflate the possible behaviors of future computers with the behaviors of current computers. Obviously computers don’t feel happiness or love, and so, the reasoning goes, to say that the brain is like a computer is a farcical claim.

Machines don’t have to be as simple or as un-adaptable and invariant as they are today. The universe itself is a machine. In other words, either everything is a machine or nothing is.

This misunderstanding also makes people think that advocates and supporters of Mind Uploading are claiming that the mind is reducible to basic or simple autonomous operations, like cogs in a machine, which constitutes for many people a seeming affront to our privileged place in the universe as humans, in general, and to our culturally ingrained notions of human dignity being inextricably tied to physical irreducibility, in particular. The intuitive notions of human dignity and the ontologically privileged nature of humanity have yet to catch up with physicalism and scientific materialism (a.k.a. metaphysical naturalism). It is not the proponents of Mind Uploading who are raising these claims, but science itself – and has been for hundreds of years, I might add. Man’s privileged and physically irreducible ontological status has become more and more undermined throughout history since at least as far back as Darwin’s theory of evolution, which brought the notion of the past and future phenotypic evolution of humanity into scientific plausibility for the first time.

It is also seemingly disenfranchising to many people, in that notions of human free will and autonomy seem to be challenged by physical reductionism and determinism – perhaps because many people’s notions of free will are still associated with a non-physical, untouchably metaphysical human soul (i.e., mind-body dualism) which lies outside the purview of physical causality. To compare the brain to a “mindless machine” is still for many people disenfranchising to the extent that it questions the legitimacy of their metaphysically tied notions of free will.

Just because the sheer audacity of experience and the raucous beauty of feeling is ultimately reducible to physical and procedural operations (I hesitate to use the word “mechanisms” for its likewise misconnotative conceptual associations) does not take away from it. If it were the result of some untouchable metaphysical property, a sentiment that mind-body-dualism promulgated for quite some time, then there would be no way for us to understand it, to really appreciate it, and to change it (e.g., improve upon it) in any way. Physicalism and scientific materialism are needed if we are to ever see how it is done and to ever hope to change it for the better. Figuring out how things work is one of Man’s highest merits – and there is no reason Man’s urge to discover and determine the underlying causes of the world should not apply to his own self as well.

Moreover, the fact that experience, feeling, being, and mind result from the convergence of individually simple systems and processes makes the mind’s emergence from such simple convergence all the more astounding, amazing, and rare, not less! If the complexity and unpredictability of mind were the result of complex and unpredictable underlying causes (like the metaphysical notions of mind-body dualism connote), then the fact that mind turned out to be complex and unpredictable wouldn’t be much of a surprise. The simplicity of the mind’s underlying mechanisms makes the mind’s emergence all the more amazing, and should not take away from our human dignity but should instead raise it up to heights yet unheralded.

Now that we have addressed such potentially harmful second-order misinterpretations, we will address their root: the common misinterpretations likely to result from the phrase “the computability of the mind”. Not only does this phrase not say that the mind is similar in basic operation to conventional paradigms of computation – as though a neuron were comparable to a logic gate or transistor – but neither does it necessarily make the more credible claim that the mind is like a computer in general. That misinterpretation makes the notion of Mind Uploading seem dubious because it conflates two different types of physical systems – computers and brains.

The kidney is just as computable as the brain. That is to say that the computability of mind denotes the ability to make predictively accurate computational models (i.e., simulations and emulations) of biological systems like the brain, and is not dependent on anything like a fundamental operational similarity between biological brains and digital computers. We can make a computational model of a given physical system, feed it some typical inputs, and get a resulting output that approximately matches the real-world (i.e., physical) output of such a system.

The computability of the mind has very little to do with the mind acting as or operating like a computer, and much, much more to do with the fact that we can build predictively accurate computational models of physical systems in general. This also, advantageously, negates and obviates many of the seemingly dehumanizing and indignifying connotations identified above that often result from the claim that the brain is like a machine or like a computer. It is not that the brain is like a computer – it is just that computers are capable of predictively modeling the physical systems of the universe itself.
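To make the point concrete: predictively modeling a physical system requires no claim that the system is “like a computer”. As a minimal sketch (the choice of system, parameters, and step size are arbitrary illustrations, not anything from this article), here is a numerical model of Newton’s law of cooling, a physical process nobody would call computer-like, checked against its exact physical behavior:

```python
import math

def simulate_cooling(T0, T_env, k, t_end, dt=0.001):
    """Euler-step computational model of dT/dt = -k * (T - T_env):
    a predictively accurate model of a physical system that in no sense
    'operates like a computer'."""
    T, t = T0, 0.0
    while t < t_end:
        T += -k * (T - T_env) * dt
        t += dt
    return T

def exact_cooling(T0, T_env, k, t):
    # Closed-form solution, standing in for the real system's measured output.
    return T_env + (T0 - T_env) * math.exp(-k * t)

model = simulate_cooling(T0=90.0, T_env=20.0, k=0.3, t_end=5.0)
real = exact_cooling(90.0, 20.0, 0.3, 5.0)
assert abs(model - real) < 0.1  # the model's output approximately matches the system's
```

The brain’s computability is a claim of exactly this kind, just at vastly greater computational intensity.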

We Want Not To Become Machines, But To Keep Up With Them!

Too often is uploading portrayed as the means to superhuman speed of thought or to transcending our humanity. It is not that we want to become less human, or to become like a machine. For most Transhumanists, and indeed most proponents of Mind Uploading and Substrate-Independent Minds, meat is machinery anyway. In other words, there is no real (i.e., legitimate) ontological distinction between human minds and machines to begin with. Too often is uploading seen as the desire for superhuman abilities. Too often is it seen as a bonus, nice but ultimately unnecessary.

I vehemently disagree. Uploading has been from the start for me (and I think for many other proponents and supporters of Mind Uploading) a means of life extension, of deferring and ultimately defeating untimely, involuntary death, as opposed to an ultimately unnecessary means to better powers, a more privileged position relative to the rest of humanity, or to eschewing our humanity in a fit of contempt of the flesh. We do not want to turn ourselves into Artificial Intelligence, which is a somewhat perverse and burlesque caricature that is associated with Mind Uploading far too often.

The notion of gradual uploading is implicitly a means of life extension. Gradual uploading will be significantly harder to accomplish than destructive uploading. It requires a host of technologies and methodologies – brain-scanning, in-vivo locomotive systems such as but not limited to nanotechnology, or else extremely robust biotechnology – and a host of precautions to prevent causing phenomenal discontinuity, such as giving each non-biological functional replacement time to causally interact with adjacent biological components before the next biological component it causally interacts with is likewise replaced. Gradual uploading is a much harder feat than destructive uploading, and the only advantage it has over destructive uploading is preserving the phenomenal continuity of a single specific person. In this way it is implicitly a means of life extension, rather than a means to the creation of AGI, because its only benefit is the preservation and continuation of a single, specific human life, and that benefit entails a host of added precautions and additional necessitated technological and methodological infrastructures.

If we didn’t have to fear the creation of recursively self-improving AI, biased towards being likely to recursively self-modify at a rate faster than humans are likely to (or indeed, are able to safely – that is, gradually enough to prevent phenomenal discontinuity), then I would favor biotechnological methods of achieving indefinite lifespans over gradual uploading. But with the way things are, I am an advocate of gradual Mind Uploading first and foremost because I think it may prove necessary to prevent humanity from being left behind by recursively self-modifying superintelligences. I hope that it ultimately will not prove necessary – but at the current time I feel that it is somewhat likely.

Most people who wish to implement or accelerate an intelligence explosion a la I.J. Good, and more recently Vernor Vinge and Ray Kurzweil, wish to do so because they feel that such a recursively self-modifying superintelligence (RSMSI) could essentially solve all of humanity’s problems – disease, death, scarcity, existential insecurity. I think that the potential benefits of creating a RSMSI are superseded by the drastic increase in existential risk it would entail in making any one entity superintelligent relative to humanity. The old God of yore is finally going out of fashion, one and a quarter centuries late to his own eulogy. Let’s please not make another one, now with a little reality under his belt this time around.

Intelligence is a far greater source of existential and global catastrophic risk than any technology that could be wielded by such an intelligence (except, of course, for technologies that would allow an intelligence to increase its own intelligence). Intelligence can invent new technologies and conceive of ways to counteract any defense systems we put in place to protect against the destructive potentials of any given technology. A superintelligence is far more dangerous than rogue nanotech (i.e., grey-goo) or bioweapons. When intelligence comes into play, then all bets are off. I think culture exemplifies this prominently enough. Moreover, for the first time in history the technological solutions to these problems – death, disease, scarcity – are on the conceptual horizon. We can fix these problems ourselves, without creating an effective God relative to Man and incurring the extreme potential for complete human extinction that such a relative superintelligence would entail.

Thus uploading constitutes one of the means by which humanity can choose, volitionally, to stay on the leading edge of change, discovery, invention, and novelty, if the creation of a RSMSI is indeed imminent. It is not that we wish to become machines and eschew our humanity – rather the loss of autonomy and freedom inherent in the creation of a relative superintelligence is antithetical to the defining features of humanity. In order to preserve the uniquely human thrust toward greater self-determination in the face of such a RSMSI, or at least be given the choice of doing so, we may require the ability to gradually upload so as to stay on equal footing in terms of speed of thought and general level of intelligence (which is roughly correlative with the capacity to affect change in the world and thus to determine its determining circumstances and conditions as well).

In a perfect world we wouldn’t need to take the chance of phenomenal discontinuity inherent in gradual uploading. In gradual uploading there is always a chance, no matter how small, that we will come out the other side of the procedure as a different (i.e., phenomenally distinct) person. We can seek to minimize the chances of that outcome by extending the degree of graduality with which we gradually replace the material constituents of the mind, and by minimizing the scale at which we gradually replace those material constituents (i.e., gradual substrate replacement one ion-channel at a time would be likelier to ensure the preservation of phenomenal continuity than gradual substrate replacement neuron by neuron would be). But there is always a chance.

This is why biotechnological means of indefinite lifespans have an immediate advantage over uploading, and why if non-human RSMSI were not a worry, I would favor biotechnological methods of indefinite lifespans over Mind Uploading. But this isn’t the case; rogue RSMSI are a potential problem, and so the ability to secure our own autonomy in the face of a rising RSMSI may necessitate advocating Mind Uploading over biotechnological methods of indefinite lifespans.

Mind Uploading has some ancillary benefits over biotechnological means of indefinite lifespans as well, however. If functional equivalence is validated (i.e., if it is validated that the basic approach works), mitigating existing sources of damage becomes categorically easier. In physical embodiment, repairing structural, connectional, or procedural sub-systems in the body requires (1) a means of determining the source of damage and (2) a host of technologies and corresponding methodologies to enter the body and make physical changes to negate or otherwise obviate the structural, connectional, or procedural source of such damages, and then exit the body without damaging or causing dysfunction to other systems in the process. Both of these requirements become much easier in the virtual embodiment of whole-brain emulation.

First, looking toward requirement (2), we do not need to actually design any technologies and methodologies for entering and leaving the system without damage or dysfunction, or for actually implementing physical changes leading to the remediation of the sources of damage. In virtual embodiment this requires nothing more than rewriting information. Since in the case of WBE we have the capacity to rewrite information as easily as it was written in the first place, while we would still need to know what changes to make (which is really the hard part in this case), actually implementing those changes is as easy as rewriting a Word file. There is no categorical difference, since it is information, and we would already have a means of rewriting information.

Looking toward requirement (1), actually elucidating the structural, connectional or procedural sources of damage and/or dysfunction, we see that virtual embodiment makes this much easier as well. In physical embodiment we would need to make changes to the system in order to determine the source of the damage. In virtual embodiment we could run a section of emulation for a given amount of time, change or eliminate a given informational variable (i.e. structure, component, etc.) and see how this affects the emergent system-state of the emulation instance.

Iteratively applying this procedure to different components and different sequences of components, in trial-and-error fashion, should elucidate the structural, connectional, or procedural sources of damage and dysfunction. The fact that an emulation can be run faster (thus accelerating this iterative change-and-check procedure) and can be "rewound" or "played back" exactly as it occurred initially means that noise (i.e., sources of error) from natural systemic state-changes would not affect the results, whereas in physical embodiment systems and structures are always changing, which constitutes a source of experimental noise. The conditions of the experiment would be exactly the same in every iteration of this change-and-check procedure. Moreover, the ability to arbitrarily speed up and slow down the emulation would aid in detecting and locating the emergent changes caused by altering or eliminating a given microscale component, structure, or process.
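The change-and-check loop described above can be sketched in miniature. Everything below is an illustrative assumption rather than part of any real WBE toolkit: the "emulation" is a toy deterministic function of its components, standing in for a replayable emulation instance. The point is only the ablation logic, and the fact that a deterministic replay from identical initial conditions removes experimental noise from the comparison.

```python
def run_emulation(components, steps=10):
    """Toy stand-in for an emulation instance: the emergent state is a
    deterministic function of the active components, so every replay of
    the same configuration yields identical results (no noise)."""
    state = 0
    for _ in range(steps):
        state = sum(components.values()) + state // 2
    return state

def localize_damage(components, healthy_state):
    """Iterative change-and-check: ablate one component at a time and
    replay the emulation from the same initial conditions. Any component
    whose removal restores the healthy emergent state is flagged as a
    source of dysfunction."""
    culprits = []
    for name in components:
        trial = {k: v for k, v in components.items() if k != name}
        if run_emulation(trial) == healthy_state:
            culprits.append(name)
    return culprits

# Hypothetical usage: a "damaged" configuration differs from the healthy
# one by a single spurious component, which the procedure isolates.
healthy = {"a": 3, "b": 5}
damaged = {"a": 3, "b": 5, "lesion": -4}
print(localize_damage(damaged, run_emulation(healthy)))  # ['lesion']
```

In a physical system the baseline would drift between trials; here, because each trial replays the identical configuration, any divergence in the emergent state is attributable solely to the ablated component.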

Thus the process of finding the sources of damage correlative with disease and aging (especially insofar as the brain is concerned) could be greatly improved through uploading. Moreover, WBE should accelerate the technological and methodological development of the computational emulation of biological systems in general, making it possible to use such procedures to detect the structural, connectional, and procedural sources of age-related damage and systemic dysfunction in the body itself, not just the brain.

Note that this iterative change-and-check procedure would be just as possible via destructive uploading as via gradual uploading. Moreover, for people actually instantiated as whole-brain emulations, remediating those structural, connectional, and/or procedural sources of damage is much easier than it is for physically embodied humans. Incidentally, suppose that distinguishing the homeostatic, regulatory, and metabolic structures and processes in the brain from its computational or signal-processing structures and processes is a requirement for uploading. (I don't think it necessarily is, although such a distinction would decrease the ultimate computational intensity, and thus the computational requirements, of uploading, allowing it to be implemented sooner and have wider availability.) Then this iterative change-and-check procedure could also be used to accelerate the elucidation of that distinction, for the same reasons that it could accelerate the elucidation of the structural, connectional, and procedural sources of age-related systemic damage and dysfunction.

Lastly, while uploading itself constitutes a source of existential risk (particularly in instances where a single entity or small group of entities is uploaded prior to the rest of humanity, i.e., not a maximally distributed intelligence explosion), it also constitutes a means of mitigating existential risk. Currently we stand on the surface of the earth, naked to whatever might lurk in the deep night of space. We have not been watching the sky long enough to know with any certainty that some unforeseen cosmic process could not come along and wipe us out at any time. Uploading would allow at least a small portion of humanity to live virtually on a computational substrate located deep underground, away from the surface of the earth and its inherent dangers, thus preserving the human heritage should an extinction event befall humanity. Uploading would also eliminate the danger of being killed by some accident of physicality, like being hit by a bus or struck by lightning.

Uploading is also the most resource-efficient means of life-extension on the table, because virtual embodiment essentially negates the need for most physical resources, requiring chiefly one: energy. And increasing computational price-performance means that how much a given amount of energy can accomplish is continually growing.

It also mitigates the most pressing ethical problem of indefinite lifespans: overpopulation. In virtual embodiment, overpopulation ceases to be an issue almost ipso facto. I agree with John Smart's STEM compression hypothesis: in the long run the advantages proffered by virtual embodiment will make choosing it over physical embodiment an obvious choice for most civilizations, and I think it will be the volitional choice for most future persons. It is the safer, more resource-efficient (and thus more ethical, if one thinks that forestalling future births in order to maintain existing lives is unethical), and more advantageous choice. We will not need to say "migrate into virtuality if you want another physically embodied child"; most people will choose virtuality themselves, simply due to its numerous advantages and the lack of any experiential incomparabilities (i.e., modalities of experience possible in physicality but not in VR).

So in summary, yes, Mind Uploading (especially gradual uploading) is more a means of life-extension than a means to arbitrarily greater speed of thought, intelligence, or power (i.e., capacity to effect change in the world). We do not seek to become machines, only to retain the capability of choosing to remain on equal footing with them if the creation of RSMSI is indeed imminent. There is no other reason to increase our collective speed of thought, and to do so would be arbitrary – unless we expected to be unable to prevent the physical end of the universe, in which case doing so would increase the ultimate amount of time, and the number of lives, that could be instantiated in the time we have left.

The flaws in many of these misconceptions may be glaringly obvious, especially to readers familiar with Mind Uploading as a notion and with Substrate-Independent Minds and/or Whole-Brain Emulation as disciplines, and I may to some extent be preaching to the choir. But I find many of these misinterpretations far too prevalent and recurrent to be left alone.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors. He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.
