Guide to Talking about Immortality – Article by Wendy Hou

The New Renaissance Hat
Wendy Hou
April 1, 2014
******************************

Introduction

Wobster’s List of Words to Avoid

A Non-Threatening Script (Faith-Friendly!)

FAQs

Introduction

Death is natural. Death gives life meaning. Nothing would be meaningful if you lived forever. You’ll be bored of living. Immortality comes through what we leave behind. You live on in your children. Immortality would only be available to the wealthy. You’ll cause class warfare. Earth would run out of resources. People would stop having children. You should overcome your fear of death so you can live more fully.

A discussion about potential immortality is among the most frustrating conversations a rationalist will ever have. Nowhere else is the response so uniform, uniformly hostile, and boringly predictable. While a more intelligent or more educated person generally makes for a better discussion, that doesn’t seem to make any difference here.

Meet Generic Gerry. This is an ordinary person with an ordinary upbringing, uploaded with our society’s typical views on death. Here are my tips for talking to Generic Gerry. I hope they will be useful to you, so you can skip the pointless swirl and have a more fruitful discussion.

Wobster’s List of Words to Avoid

To begin with, here are some words you shouldn’t say.

  1. Immortal / immortality / live forever

This is number 1 for a reason! When you say “immortal”, you’re thinking of reading books and making art and enjoying the company of loved ones. You know what Gerry is thinking? Voldemort. Or perhaps the wicked stepmother in Tangled. Or perhaps the Flying Dutchman. Literature has not been kind. Let’s just skip the part where Gerry calls you selfish and accuses you of sacrificing others for yourself.

  2. Transhumanism

“Oh, like Ray Kurzweil!” Generic Gerry knows exactly one transhumanist, Ray Kurzweil. And (while Mr. Kurzweil is an excellent and inspiring person) Gerry thinks he’s crazy. Unfortunately, Gerry hasn’t actually met Mr. Kurzweil, only heard stories. Secondhand. They’ve become distorted along the way. “He takes 1000 vitamins and wants to bring back his father’s voice in a box!”

  3. Cryonics

Another topic that’s treated unfairly in the media. At best, Gerry thinks cryonics is weird; at worst, a cowardly scam. We don’t need those negative feelings here.

  4. Singularity / AI

Not directly relevant here, and kind of scary to Generic Gerry, who’s not super excited about computers taking over the world.

These are all buzzwords. They are like light switches in a room or buttons in a psyche. The moment you say “immortality”, you are no longer talking to an agent. You are now talking to an NPC. NPCs are all about programming. Their thinking switches off while their programming switches on, and out of their mouths comes a whole culture’s worth of social platitudes, all in one big defensive stream.

That’s why it’s always the same conversation.

A Non-Threatening Script (Faith-Friendly!)

Since talking about “not dying” makes Generic Gerry raise the defensive shields, I like to talk about “not dying without consent.”

  1. Begin with something anyone can agree with.

“Doesn’t it suck when people die of cancer at the age of 40 with two young kids? Or when they die slowly of Alzheimer’s?”

  2. Link to aging.

“If we could fix these aging-related problems, people wouldn’t get cancer when they get older anymore. They would stay healthy and active.”

  3. Introduce the vision.

“Instead of dying from cancer before they are ready, they can live out all their dreams and read all the books they want.”

  4. Stick close to the cultural norm.

“Then, when they decide they are ready, they can set up their affairs, get their finances in order, and die surrounded by family and friends.”

Of course, there will always be new books to read, and maybe you’d never decide you are ready to die, but you don’t have to say it. Leave Gerry to come to that conclusion.

It works even with the religious who want to be with their god or their eternal family someday. Most would object to never dying, but some do appreciate more control over when and how.

It’s important to remember you won’t change Gerry’s mind overnight. Gerry will have to think about it over weeks and months, maybe even years. Your goal is to crack the gates open. If Gerry rejects immortality, that gate is slammed shut. But if Gerry expresses interest in choosing the timing and circumstances of death, you’ve got your foot in the door! Gerry will not be openly hostile to discussing aging research with you. Perhaps Gerry will even be interested in the research or excited about advances. And for a first conversation, that’s the best you can hope for.

FAQs

I’ve heard every one of these way too many times. In all likelihood, so have you.

  • I want to go to heaven.

It will always be trivially easy to die. You’ll just get to choose when you’re ready. You won’t have to die unexpectedly at the age of 60 wishing you could watch your grandchild grow up.

  • If you’re afraid to die, you’re not really living.

Unfortunately, you are thinking of Voldemort, a character so afraid to die he never truly lived. Voldemort is also fiction. In real life, I’m more like a person who eats healthy to avoid heart disease.

  • Won’t living forever get boring?

Not in the first 1000 years, no. After that, you can choose to die if it’s boring.

  • When people are old, they are ready to die.

Seeing as 22% of all healthcare costs are incurred in the last year of life, no, they aren’t. But even if they were…

When people are old, they are also tired, achy, and frail. Would they still be ready if they were healthy, fit, and active? Perhaps the real age when they’d be ready is 200 or 1000. We don’t know.

  • Would it be available to everyone or just the wealthy?

Short answer: It will be available to everyone.

Long answer: Even today, vaccines aren’t readily available in Africa. But we don’t grab our pitchforks, yelling “Down with vaccines!” In the US, cancer treatments are still limited to those who can afford them. Chemotherapy started with Eva Peron before reaching the rest of Argentina. Life extension will begin with the wealthy, too. One day, it will reach everyone. Those who care can help fund life extension for the poor, or better yet, donate to research to make the life-extension techniques cheaper and better.

  • How will Earth support all those people?

That’s something we’ll have to figure out. Perhaps we could mine asteroids for resources or grow food on space stations. We might need to have fewer children until we can support them. What we don’t do is let the elderly die for resources, not even now.

  • Death is but the next great adventure.

That’s your belief, and you can choose it for yourself, but please don’t choose that path for me.

Wendy Hou is a programmer, mathematics instructor, and life-extension supporter.

Technological Singularities: An Overview – Video by G. Stolyarov II

Mr. Stolyarov explains the basic concept of a technological Singularity and his understanding that humankind has already experienced three such Singularities in the form of the Agricultural, Industrial, and Information Revolutions. The next Singularity will come about due to a convergence of technologies such as artificial intelligence, nanotechnology, and biotechnology (including indefinite life extension).

Transhumanism, Technology, and Science: To Say It’s Impossible Is to Mock History Itself – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 30, 2013
******************************
One of the most common arguments made against Transhumanism, Technoprogressivism, and the transformative potentials of emerging, converging, disruptive, and transformative technologies may also be the weakest: technical infeasibility. Some thinkers attack the veracity of Transhumanist claims on moral grounds, arguing that we are committing a transgression against human dignity (in turn often based on the ontological premise of a static human nature that shan’t be tampered with); others attack them on grounds of safety, arguing that humanity isn’t responsible enough to wield such technologies without unleashing their destructive capabilities. These categories of counter-argument (ethics and safety, respectively) are more often than not made by people somewhat more familiar with the community and its common points of rhetoric.
***
In other words, these are the really salient and significant problems needing to be addressed by the Transhumanist and Technoprogressive communities. The good news is that the ones making the most progress in deliberating the possible repercussions of emerging technologies are those very communities. The large majority of thinkers and theoreticians working on Existential Risk and Global Catastrophic Risk, as at the Future of Humanity Institute and the Lifeboat Foundation, share Technoprogressive inclinations. Meanwhile, the largest proponents of the need to ensure wide availability of enhancement technologies, as well as of the need to provide personhood rights to non-biologically-substrated persons, are found amidst the ranks of Technoprogressive think tanks like the IEET.
***

A more frequent Anti-Transhumanist and Anti-Technoprogressive counter-argument, by contrast, and one most often launched by people approaching Transhumanist and Technoprogressive communities from the outside, with little familiarity with their common points of rhetoric, is the claim of technical infeasibility based upon little more than sheer incredulity.

Sometimes a concept or notion simply seems too unprecedented to be possible. But it’s just too easy for us to get stuck in a spacetime rut along the continuum of culture and feel that if something were possible, it would either have already happened or would be in the final stages of completion today. “If something is possible, then why hasn’t anyone done it? Shouldn’t the fact that it has yet to be accomplished indicate that it isn’t possible?” This conflates ought with is (which Hume showed us is a fallacy) and ought with can. Ought is not necessarily correlative with either. At the risk of stating the laughably obvious, something must occur at some point in order for it to occur at all. The Moon landing happened in 1969 because it happened in 1969, and to have argued in 1968 that it simply wasn’t possible solely because it had never been done before would not have been a valid argument for its technical infeasibility.

If history has shown us anything, it has shown us that history is a fantastically poor indicator of what will and will not become feasible in the future. Statistically speaking, it seems as though the majority of things that were said to be impossible to implement via technology have nonetheless come into being. Likewise, it seems as though the majority of feats it was said to be possible to facilitate via technology have also come into being. The ability to possiblize the seemingly impossible via technological and methodological in(ter)vention has been exemplified throughout the course of human history so prominently that we might as well consider it a statistical law.

We can feel the sheer fallibility of the infeasibility-from-incredulity argument intuitively when we consider how credible it would have seemed a mere 100 years ago to claim that we would soon be able to send sentences into the air, to be routed to a device in your pocket (and only your pocket, not the device in the pocket of the person sitting right beside you). How likely would it have seemed 200 years ago if you claimed that 200 years hence it would be possible to sit comfortably and quietly in a chair in the sky, inside a large tube of metal that fails to fall fatally to the ground?

Simply look around you. An idiosyncratic genus of great ape did this! Consider how remarkably absurd it would seem for the gorilla genus to have coordinated their efforts to build skyscrapers; to engineer devices that took them to the Moon; to be able to send a warning or mating call to the other side of the earth in less time than such a call could actually be made via physical vocal cords. We live in a world of artificial wonder, and act as though it were the most mundane thing in the world. But considered in terms of geological time, the unprecedented feat of culture and artificial artifact just happened. We are still in the fledgling infancy of the future, which only began when we began making it ourselves.
***

We have no reason whatsoever to doubt the eventual technological feasibility of anything, really, when we consider all the things that were said to be impossible yet happened, all the things that were said to be possible and did happen, and all the things that were completely unforeseen yet happened nonetheless. In light of history, it seems more likely that a given thing would eventually be possible via technology than that it wouldn’t ever be possible. I fully appreciate the grandeur of this claim, but I stand by it nonetheless. To claim that a given ability will probably not eventually be possible to implement via technology is to laugh in the face of history to some extent.

The main exceptions to this claim are abilities wherein you limit or specify the route of implementation: categories of ability where the implementation is specified as part of the end-ability itself. Thus it probably would not eventually be possible to, say, infer the states of all the atoms comprising the Eiffel Tower from the state of a single atom in your fingernail; in that case, the end-ability just is the inference of the state of all the atoms in the Eiffel Tower from the state of a single atom.

These exceptions also serve to illustrate the paramount feature allowing technology to possiblize the seemingly improbable: novel means of implementation. Very often there is a bottleneck in the current system we use to accomplish something that limits the scope of its abilities and prevents certain objectives from being facilitated by it. In such cases a whole new paradigm of approach is what moves progress forward to realizing that objective. If the goal is the reversal and indefinite remediation of the causes and sources of aging, the paradigms of medicine available at the turn of the 20th century would have seemed unable to accomplish such a feat.

The new paradigm of biotechnology and genetic engineering was needed to formulate a scientifically plausible route to the reversal of aging-correlated molecular damage – a paradigm somewhat non-inherent in the medical paradigms and practices common at the turn of the 20th Century. It is the notion of a new route to implementation, a wholly novel way of making the changes that could lead to a given desired objective, that constitutes the real ability-actualizing capacity of technology – and one that such cases of specified implementation fail to take account of.

One might think that there are other clear exceptions to this as well: devices or abilities that contradict the laws of physics as we currently understand them, e.g., perpetual-motion machines. Yet even here we see many historical antecedents exemplifying our short-sighted foresight in regard to “the laws of physics”. Our understanding of the physical “laws” of the universe undergoes massive upheaval from generation to generation. Thomas Kuhn’s The Structure of Scientific Revolutions challenged the predominant view that scientific progress occurs by accumulated development and discovery, arguing instead that it is driven by the rise of new conceptual paradigms categorically dissimilar to those that preceded them (Kuhn, 1962), paradigms which then define the new predominant directions of research, development, and discovery in almost all areas of scientific conceptualization.

Kuhn’s insight can be seen to be paralleled by the recent rise in popularity of Singularitarianism, which today seems to have lost its strict association with I. J. Good’s posited type of intelligence explosion created via recursively self-modifying strong AI, and now seems to encompass any vision of a profound transformation of humanity or society through technological growth and the introduction of truly disruptive emerging and converging (e.g., NBIC) technologies.

This epistemic paradigm holds that the future is less determined by the smooth progression of existing trends and more by the massive impact of specific technologies and occurrences – the revolution of innovation. Kurzweil’s own version of Singularitarianism (Kurzweil, 2005) uses the systemic progression of trends in order to predict a state of affairs created by the convergence of such trends, wherein the predictable progression of trends points to their own destruction in a sense, as the trends culminate in our inability to predict past that point. We can predict that there are factors that will significantly impede our predictive ability thereafter. Kurzweil’s and Kuhn’s thinking are also paralleled by Buckminster Fuller in his notion of ephemeralization (i.e., doing more with less), and by the post-industrial information economies and socioeconomic paradigms described by Alvin Toffler (Toffler, 1970), John Naisbitt (Naisbitt, 1982), and Daniel Bell (Bell, 1973), among others.

It can also partly be seen to be inherent in almost all formulations of technological determinism, especially variants of what I call reciprocal technological determinism (not simply that technology determines or largely constitutes the determining factors of societal states of affairs, not simply that tech affects culture, but rather that culture affects technology, which then affects culture, which then affects technology), à la Marshall McLuhan (McLuhan, 1964). This broad epistemic paradigm, wherein the state of progress is determined more by small but radically disruptive changes, innovations, and deviations than by the continuation or convergence of smooth and slow-changing trends, can be seen to be inherent in variants of technological determinism because technology is ipso facto (i.e., by its very defining attributes) categorically new and paradigmatically disruptive; and if culture is affected significantly by technology, then it is also affected by punctuated instances of radical innovation unanticipated by any trend.

That being said, as Kurzweil has noted, a given technological paradigm “grows out of” the paradigm preceding it, and so the extents and conditions of a given paradigm will to some extent determine the conditions and allowances of the next paradigm. But that is not to say that they are predictable; they may be inherent while still remaining non-apparent. After all, the trend of mechanical components’ increasing miniaturization could be seen hundreds of years ago (e.g., Babbage knew that the mechanical precision available via the manufacturing paradigms of his time would impede his ability to realize his Analytical Engine, but that its implementation would one day be made possible by the trend of increasingly precise manufacturing standards), but the fact that this trend could culminate in the ephemeralization of Buckminster Fuller (Fuller, 1938) or the mechanosynthesis of K. Eric Drexler (Drexler, 1986) was not nearly so apparent.

Moreover, the types of occurrence allowed by a given scientific or methodological paradigm seem, at least intuitively, to expand rather than contract as we move forward through history. This can be seen lucidly in the physics of the early 20th Century, which delivered such conceptual affronts to our intuitive notions of the possible as non-locality (i.e., quantum entanglement, and with it quantum information teleportation and even quantum energy teleportation; in other words, faster-than-light correlation between spatially separated physical entities), Einstein’s theory of relativity (which implied such counter-intuitive notions as the measurement of quantities being relative to the velocity of the observer, e.g., the passing of time as measured by clocks being different in space than on Earth), and the hidden-variable theory of David Bohm (which implied such notions as the velocity of any one particle being determined by the configuration of the entire universe). These notions belligerently contradict what we feel intuitively to be possible. Here we have claims that such strange abilities as informational and energetic teleportation, faster-than-light correlation of physical and/or informational states, and spacetime dilation are natural, non-technological properties and abilities of the physical universe.

Technology is Man’s foremost mediator of change; it is by and large through the use of technology that we expand the parameters of the possible. This is why the fact that these seemingly fantastic feats were claimed to be possible “naturally”, without technological implementation or mediation, is so significant. The notion that they are possible without technology makes them all the more fantastical and intuitively improbable.

We also sometimes forget the even more fantastic claims of what can be done through the use of technology, such as stellar engineering and mega-scale engineering, made by some of the big names in science. There is the Dyson Sphere of Freeman Dyson, which details a technological method of harnessing potentially the entire energetic output of a star (Dyson, 1960). One can also find speculation made by Dyson concerning the ability for “life and communication [to] continue for ever, using a finite store of energy” in an open universe, by utilizing smaller and smaller amounts of energy to power slower and slower computationally emulated instances of thought (Dyson, 1979).

There is the Tipler Cylinder (also called the Tipler Time Machine) of Frank J. Tipler, which describes a dense cylinder of infinite length rotating about its longitudinal axis to create closed timelike curves (Tipler, 1974). While Tipler speculated that a cylinder of finite length could produce the same effect if rotated fast enough, he didn’t provide a mathematical solution for this second claim. There is also speculation by Tipler on the ability to utilize energy harnessed from gravitational shear created by the forced collapse of the universe at different rates and in different directions, which he argues would allow the universe’s computational capacity to diverge to infinity, essentially providing computationally emulated humans and civilizations the ability to run for an infinite duration of subjective time (Tipler, 1986, 1997).

We see such feats of technological grandeur paralleled by Kurt Gödel, who produced an exact solution to the Einstein field equations that describes a cosmological model of a rotating universe (Gödel, 1949). While cosmological evidence (e.g., suggesting that our universe is not a rotating one) indicates that his solution doesn’t describe the universe we live in, it nonetheless constitutes a hypothetically possible cosmology in which time-travel (again, via a closed timelike curve) is possible. And because closed timelike curves seem to require large amounts of acceleration – i.e. amounts not attainable without the use of technology – Gödel’s case constitutes a hypothetical cosmological model allowing for technological time-travel (which might be non-obvious, since Gödel’s case doesn’t involve such technological feats as a rotating cylinder of infinite length, rather being a result derived from specific physical and cosmological – i.e., non-technological – constants and properties).

These are large claims made by large names in science (i.e., people who do not make claims frivolously, and in most cases require quantitative indications of their possibility, often in the form of mathematical solutions, as in the cases mentioned above) and all of which are made possible solely through the use of technology. Such technological feats as the computational emulation of the human nervous system and the technological eradication of involuntary death pale in comparison to the sheer grandeur of the claims and conceptualizations outlined above.

We live in a very strange universe, which is easy to forget amidst our feigned mundanity. We have no excuse to express incredulity at Transhumanist and Technoprogressive conceptualizations, considering how stoically we accept such notions as the existence of sentient matter (i.e., biological intelligence) or the ability of a genus of great ape to stand on extraterrestrial land.

Thus, one of the most common counter-arguments launched at many Transhumanist and Technoprogressive claims and conceptualizations – namely, technical infeasibility based upon nothing more than incredulity and/or the lack of a definitive historical precedent – is one of the most baseless counter-arguments as well. It would be far more credible to argue for the technical infeasibility of a given endeavor within a certain time-frame. Not only do we have little, if any, indication that a given ability or endeavor will fail to eventually become realizable via technology given enough development-time, but we even have historical indication of the very antithesis of this claim, in the form of the many, many instances in which a given endeavor or feat was said to be impossible, only to be realized via technological mediation thereafter.

It is high time we accepted the fallibility of base incredulity and the infeasibility of the technical-infeasibility argument. I remain stoically incredulous at the audacity of fundamental incredulity, for nothing should be incredible to man, who makes his own credibility in any case, and who is most at home in the necessary superfluous.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

References

Bell, D. (1973). “The Coming of Post-Industrial Society: A Venture in Social Forecasting.” New York: Basic Books. ISBN 0-465-01281-7.

Dyson, F. (1960) “Search for Artificial Stellar Sources of Infrared Radiation”. Science 131: 1667-1668.

Dyson, F. (1979). “Time without end: Physics and biology in an open universe,” Reviews of Modern Physics 51 (3): 447-460.

Fuller, R.B. (1938). “Nine Chains to the Moon.” Anchor Books pp. 252–59.

Gödel, K. (1949). “An example of a new type of cosmological solution of Einstein’s field equations of gravitation”. Rev. Mod. Phys. 21 (3): 447–450.

Kuhn, T.S. (1962). “The Structure of Scientific Revolutions” (1st ed.). University of Chicago Press. LCCN 62019621.

Kurzweil, R. (2005). “The Singularity is Near.” Penguin Books.

McLuhan, M. (1964). “Understanding Media: The Extensions of Man” (1st ed.). New York: McGraw-Hill.

Naisbitt, J. (1982). “Megatrends: Ten New Directions Transforming Our Lives.” Warner Books.

Tipler, F. (1974) “Rotating Cylinders and Global Causality Violation”. Physical Review D9, 2203-2206.

Tipler, F. (1986). “Cosmological Limits on Computation”, International Journal of Theoretical Physics 25 (6): 617-661.

Tipler, F. (1997). The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead. New York: Doubleday. ISBN 0-385-46798-2.

Toffler, A. (1970). “Future shock.” New York: Random House.

Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 18, 2013
******************************
This essay is the third chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first two chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death” and “Immortality: Material or Ethereal? Nanotech Does Both!”.
***

The simplest approach to the functional replication of biological neurons I conceived of during this period involved what is normally called a “black-box” model of a neuron. This was already a concept in the wider brain-emulation community, but I had yet to find out about it. This is even simpler than the mathematically weighted Artificial Neurons discussed in the previous chapter. Rather than emulating or simulating the behavior of a neuron (i.e., using actual computational—or more generally signal—processing), we (1) determine the range of input values that a neuron responds to, (2) stimulate the neuron at each interval (the number of intervals depending on the precision of the stimulus) within that input range, and (3) record the corresponding range of outputs.

This reduces the neuron to essentially a look-up-table (or, more formally, an associative array). The input ranges I originally considered (in 2007) consisted of a range of electrical potentials, but later (in 2008) were developed to include different cumulative organizations of specific voltage values (i.e., some inputs activated and others not) and finally the chemical input and outputs of neurons. The black-box approach was eventually seen as being applied to the sub-neuron scale—e.g., to sections of the cellular membrane. This creates a greater degree of functional precision, bringing the functional modality of the black-box NRU-class in greater accordance with the functional modality of biological neurons. (I.e., it is closer to biological neurons because they do in fact process multiple inputs separately, rather than singular cumulative sums at once, as in the previous versions of the black-box approach.) We would also have a higher degree of variability for a given quantity of inputs.
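The look-up-table idea described above can be sketched in a few lines of code. Everything here is an illustrative assumption: the function names, the voltage range, and the toy threshold “neuron” standing in for a real biological system. The point is only to show how sweeping an input range at fixed intervals and recording the outputs reduces a black box to an associative array.

```python
def build_response_table(stimulate, v_min, v_max, n_intervals):
    """Probe a black-box system across its input range, recording each output."""
    step = (v_max - v_min) / n_intervals
    table = {}
    for i in range(n_intervals + 1):
        v = v_min + i * step
        table[round(v, 9)] = stimulate(v)  # one recorded input/output pair
    return table, step

def lookup_response(table, step, v_min, v):
    """Replay the recorded behavior by snapping the input to the nearest probed interval."""
    index = round((v - v_min) / step)
    key = round(v_min + index * step, 9)
    return table[key]

# Toy stand-in for a biological neuron: fires only above a threshold potential (mV).
toy_neuron = lambda v: 1.0 if v > -55.0 else 0.0

table, step = build_response_table(toy_neuron, v_min=-80.0, v_max=-40.0, n_intervals=80)
print(lookup_response(table, step, -80.0, -70.0))  # sub-threshold input
print(lookup_response(table, step, -80.0, -50.0))  # supra-threshold input
```

A finer interval (a larger `n_intervals`) corresponds to the greater “precision of the stimulus” mentioned above, at the cost of a larger table.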

I soon chanced upon literature dealing with MEMS (micro-electro-mechanical systems) and NEMS (nano-electro-mechanical systems), which eventually led me to nanotechnology and its use in nanosurgery in particular. I saw nanotechnology as the preferred technological infrastructure regardless of the approach used; its physical nature (i.e., operational and functional modalities) could facilitate the electrical and chemical processes of the neuron if the physicalist-functionalist (i.e., physically embodied or ‘prosthetic’) approach proved either preferable or necessary, while the computation required for its normative functioning (regardless of its particular application) assured that it could facilitate the informationalist-functionalist approach (i.e., computational emulation or simulation of neurons) if that approach proved preferable. This was true of MEMS as well, with the sole exception of not being able to directly synthesize neurotransmitters via mechanosynthesis, instead being limited in this regard to the release of pre-synthesized biochemical inventories. Thus I felt I was able to work on conceptual development of the methodological and technological infrastructure underlying both (or at least on variations to the existing operational modalities of MEMS and NEMS so as to make them suitable for their intended use) without having to definitively choose one technological/methodological infrastructure over the other. Moreover, there could be processes that are reducible to computation, yet still fail to be included in a computational emulation simply because we have not yet discovered the principles underlying them. The prosthetic approach had the potential of replicating this aspect by integrating such a process, as it exists in the biological environment, into its own physical operation, and performing iterative maintenance or replacement of the biological process until the underlying principles of those processes could be discovered (a prerequisite for discovering how they contribute to the emergent computation occurring in the neuron) and thus included in the informationalist-functionalist approach.

Also, I had by this time come across the existing approaches to Mind-Uploading and Whole-Brain Emulation (WBE), including Randal Koene’s minduploading.org, and realized that the notion of immortality through gradually replacing biological neurons with functional equivalents wasn’t strictly my own. I hadn’t yet come across Kurzweil’s thinking on gradual uploading described in The Singularity is Near (where he suggests a similarly nanotechnological approach), and so felt that there was a gap in the extant literature regarding how the emulated neurons or neural networks were to communicate with existing biological neurons (an essential requirement of gradual uploading and thus of any approach meant to facilitate subjective-continuity through substrate replacement). Thus my perceived role changed from originating this concept to filling in the gaps and inconsistencies in the already-extant approach and developing it further past its present state. This is another aspect informing my choice to work on and further diversify both the computational and physical-prosthetic approaches—because this, along with the artificial-biological neural-communication problem, was what I perceived as remaining to be done after discovering WBE.

The anticipated use of MEMS and NEMS in emulating the physical processes of the neuron included at first simply electrical potentials, but eventually developed to include the chemical aspects of the neuron as well, in tandem with my increasing understanding of neuroscience. I had by this time come across Drexler’s Engines of Creation, which was my first introduction to antecedent proposals for immortality—specifically his notion of iterative cellular upkeep and repair performed by nanobots. I applied his concept of mechanosynthesis to the NRUs to facilitate the artificial synthesis of neurotransmitters. I eventually realized that the use of pre-synthesized chemical stores of neurotransmitters was a simpler approach that could be implemented via MEMS, and thus more inclusive for not requiring nanotechnology as its technological infrastructure. I also soon realized that we could eliminate the need for neurotransmitters completely by recording how specific neurotransmitters affect the nature of membrane depolarization at the post-synaptic membrane (i.e., the length and degree of depolarization or hyperpolarization, and possibly the diameter of ion channels or the differential opening of ion channels—that is, some and not others) and subsequently encoding this into the post-synaptic NRU. A discrete voltage would be assigned to each possible neurotransmitter (or emergent pattern of neurotransmitters; the salient variables include type, quantity, and relative location), such that transmitting that voltage makes the post-synaptic NRU’s controlling circuit implement the membrane-polarization changes (via changing the number of open artificial ion channels, or how long they remain open or closed, or their diameter/porosity) corresponding to the changes in biological post-synaptic membrane depolarization normally caused by that neurotransmitter.
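The encoding scheme just described can be sketched as two small mappings; everything here (the transmitter names, code voltages, and response parameters) is an illustrative assumption, not a specification from the original design.

```python
# Illustrative sketch: each neurotransmitter is assigned a discrete
# signalling voltage, and the post-synaptic NRU maps that voltage to the
# membrane-polarization changes the chemical would normally cause.

# Pre-synaptic side: neurotransmitter -> discrete signalling voltage (mV).
TRANSMITTER_CODE = {
    "glutamate": 1.0,
    "GABA": 2.0,
}

# Post-synaptic side: signalling voltage -> polarization-change parameters
# (degree and duration of (de/hyper)polarization, number of open channels).
RESPONSE_PROFILE = {
    1.0: {"delta_mv": 15.0, "duration_ms": 5.0, "open_channels": 40},
    2.0: {"delta_mv": -10.0, "duration_ms": 8.0, "open_channels": 25},
}

def transmit(neurotransmitter):
    """Replace chemical release with a coded voltage transmission."""
    return TRANSMITTER_CODE[neurotransmitter]

def apply_postsynaptic_response(code_voltage):
    """The receiving NRU's controlling circuit looks up and implements the
    membrane changes corresponding to the encoded neurotransmitter."""
    return RESPONSE_PROFILE[code_voltage]

effect = apply_postsynaptic_response(transmit("GABA"))
print(effect["delta_mv"])  # -10.0 (hyperpolarization)
```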

In terms of the enhancement/self-modification side of things, I also realized during this period that mental augmentation (particularly the intensive integration of artificial-neural-networks with the existing brain) increases the efficacy of gradual uploading by decreasing the total portion of your brain occupied by the biological region being replaced—thus effectively making that portion’s temporary operational disconnection from the rest of the brain more negligible to concerns of subjective-continuity.
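The arithmetic behind this point can be made explicit with a toy calculation; the numbers are purely illustrative, chosen only to show the direction of the effect.

```python
# Toy illustration: integrating artificial neural networks enlarges the
# total "brain", so a fixed biological region undergoing replacement
# becomes a smaller fraction of ongoing processing.

def replaced_fraction(bio_region, bio_total, artificial_added):
    # Fraction of the augmented brain occupied by the region being replaced.
    return bio_region / (bio_total + artificial_added)

# Replacing 1 unit out of 100 biological units:
print(replaced_fraction(1.0, 100.0, 0.0))    # 0.01 before augmentation
# After adding 100 units of artificial neural network:
print(replaced_fraction(1.0, 100.0, 100.0))  # 0.005 after augmentation
```

The smaller the fraction, the more negligible the region's temporary operational disconnection is to the system as a whole, which is the intuition the paragraph above relies on.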

While I was thinking of the societal implications of self-modification and self-modulation in general, I wasn’t consciously trying to do active conceptual work (e.g., working on designs for pragmatic technologies and methodologies, as I was with limitless longevity) on this side of the project, because I saw the end of death as a much more pressing moral imperative than increasing our degree of self-determination. The 100,000 unprecedented calamities that befall humanity every day cannot wait; for these dying fires it is now or never.

Virtual Verification Experiments

The various alternative approaches to gradual substrate-replacement were meant to be alternative designs contingent upon various premises about what would be needed to replicate functionality while retaining subjective-continuity through gradual replacement. I saw the various embodiments as being narrowed down through empirical validation prior to any whole-brain replication experiments. However, I now see that multiple alternative approaches—based, for example, on computational emulation (informationalist-functionalist) and physical replication (physicalist-functionalist), the two main approaches discussed thus far—would have concurrent appeal to different segments of the population. The physicalist-functionalist approach might appeal to the many people who, for one metaphysical prescription or another, don’t believe enough in the computational reducibility of mind to bet their lives on it.

These experiments originally consisted of applying sensors to a given biological neuron, constructing NRUs based on a series of variations on the two main approaches, running each, and looking for any functional divergence over time. This is essentially the same approach outlined in the WBE Roadmap (which I had yet to discover at that point), which suggests a validation procedure involving experiments on single neurons before moving on to the organismal emulation of increasingly complex species, up to and including the human. My thinking in regard to these experiments evolved over the next few years to also include some novel approaches that I don’t think have yet been discussed in communities interested in brain emulation.

An equivalent physical or computational simulation of the biological neuron’s environment is required to verify functional equivalence, as otherwise we wouldn’t be able to distinguish between functional divergence due to an insufficient replication approach or NRU design and functional divergence due to a difference in either input or operation between the model and the original (caused by insufficiently synchronizing the environmental parameters of the NRU and its corresponding original). Isolating these neurons from their organismal environment allows the necessary fidelity (and thus computational intensity) of the simulation to be minimized by reducing the number of environmental variables affecting the biological neuron during the span of the initial experiments. Moreover, even if this doesn’t give us a perfectly reliable model of the efficacy of functional replication given the number of environmental variables one expects a neuron belonging to a full brain to have, it is a fair approximation. Some NRU designs might fail in a relatively simple neuronal environment, and thus testing all NRU designs using a number of environmental variables similar to the biological brain might be unnecessary (and economically prohibitive) given its cost-benefit ratio. And since we need to isolate the neuron to perform any early non-whole-organism experiments (i.e., on individual neurons) at all, having precise control over the number and nature of environmental variables would be relatively easy, as this is already an important part of the methodology used for normative biological experimentation anyway—because lack of control over environmental variables makes for an inconsistent methodology and thus for unreliable data.
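The single-neuron validation experiment described above can be sketched as a simple divergence check; the tolerance, metric, and data below are hedged assumptions for illustration, not parameters from the original proposal.

```python
# Sketch: drive an NRU candidate and the recorded biological neuron with
# identical, controlled inputs, then flag functional divergence over time.

def functional_divergence(bio_outputs, nru_outputs):
    # Mean absolute difference between recorded and emulated responses (mV).
    return sum(abs(b - n) for b, n in zip(bio_outputs, nru_outputs)) / len(bio_outputs)

def passes_validation(bio_outputs, nru_outputs, tolerance=1.0):
    # A candidate NRU design passes if divergence stays within tolerance.
    return functional_divergence(bio_outputs, nru_outputs) <= tolerance

bio = [-70.0, -65.0, 30.0, -70.0]   # biological responses to a test input series
nru = [-70.0, -64.5, 29.5, -70.0]   # candidate NRU responses to the same series
print(passes_validation(bio, nru))  # True
```

Because both series are driven by the same controlled inputs, any divergence can be attributed to the NRU design itself rather than to environmental mismatch, which is exactly the point of the isolation step discussed above.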

And as we increase to the whole-network and eventually organismal level, a similar reduction of the computational requirements of the NRU’s environmental simulation is possible by replacing the inputs or sensory mechanisms (from single photocell to whole organs) with VR-modulated input. The required complexity and thus computational intensity of a sensorially mediated environment can be vastly minimized if the normative sensory environment of the organism is supplanted with a much-simplified VR simulation.

Note that the efficacy of this approach in comparison with the first (reducing actual environmental variables) is hypothetically greater, because going from the simplified VR version to the original sensorial environment is a difference not of category but of degree. Thus a potentially fruitful variation on the first experiment (physical reduction of a biological neuron’s environmental variables) would be not the complete elimination of environmental variables, but rather decreasing the range or degree of deviation in each variable—keeping all the categories while reducing their degree.

Incidentally, one novel modification conceived during this period involves distributing sensors (operatively connected to the sensory areas of the CNS) in the brain itself, so that we can viscerally sense ourselves thinking—the notion of metasensation: a sensorial infinite regress caused by having sensors in the sensory modules of the CNS, essentially allowing one to sense oneself sensing oneself sensing.

Another is a seeming refigurement of David Pearce’s Hedonistic Imperative—namely, the use of active NRU modulation to negate the effects of cell (or, more generally, stimulus-response) desensitization—the fact that the more times we experience something, or indeed even think something, the more it decreases in intensity. I felt that this was what made some of us lose interest in our lovers and become bored by things we once enjoyed. If we were able to stop cell desensitization, we wouldn’t have to needlessly lose experiential amplitude for the things we love.

In the next chapter I will describe the work I did in the first months of 2008, during which I worked almost wholly on conceptual varieties of the physically embodied prosthetic (i.e., physical-functionalist) approach (particularly in gradually replacing subsections of individual neurons to increase how gradual the cumulative procedure is) for several reasons:

1. The original utility of ‘hedging our bets’, as discussed earlier: developing multiple approaches increases evolutionary diversity; thus, if one approach fails, we have others to try.

2. I felt the computational side was already largely developed in the work done by others in Whole-Brain Emulation, and thus that I would benefit the larger objective of indefinite longevity more by focusing on areas that were then comparatively less developed.

3. The perceived benefit of a new approach to subjective-continuity through a substrate-replacement procedure aiming to increase the likelihood of gradual uploading’s success by increasing the procedure’s cumulative degree of graduality. The approach was called Iterative Gradual Replacement and consisted of undergoing several gradual-replacement procedures, wherein the class of NRU used becomes progressively less similar to the operational modality of the original, biological neurons with each iteration; the greater the number of iterations used, the less discontinuous each replacement phase is in relation to its preceding and succeeding phases. The most basic embodiment of this approach would involve gradual replacement with physical-functionalist (prosthetic) NRUs that are in turn gradually replaced with informationalist-functionalist (computational/emulatory) NRUs. My qualms with this approach today stem from the observation that the operational modalities of the physically embodied NRUs seem as discontinuous in relation to the operational modalities of the computational NRUs as the operational modalities of the biological neurons do. The problem seems to result from the lack of an intermediary stage between physical embodiment and computational (or second-order) embodiment.
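The Iterative Gradual Replacement idea can be summarized as an ordered sequence of substrate phases; this is a hypothetical sketch of the scheduling logic only, with the phase names taken from the text above.

```python
# Sketch of Iterative Gradual Replacement: each pass replaces the current
# substrate with one slightly less similar to biological operation, so no
# single phase constitutes a large functional jump.

PHASES = [
    "biological neuron",
    "physical-functionalist (prosthetic) NRU",
    "informationalist-functionalist (computational) NRU",
]

def replacement_schedule(phases):
    # Pair each phase with its successor: one gradual-replacement
    # procedure per consecutive pair. More intermediate phases would
    # mean a smaller discontinuity per step.
    return list(zip(phases, phases[1:]))

for old, new in replacement_schedule(PHASES):
    print(f"{old} -> {new}")
```

The author's stated qualm maps directly onto this structure: with only one intermediate phase, the final step (prosthetic to computational) may be no less discontinuous than replacing biological neurons with computational ones outright.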

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.


Non-Apocalypse, Existential Risk, and Why Humanity Will Prevail – Video by G. Stolyarov II


Doomsday predictions are not only silly but bring about harmful ways of approaching life and the world. Mr. Stolyarov expresses his view that there will never be an end of the world, an end of humanity, or an end of civilization. While some genuine existential risks do exist, most of them are not man-made, and even the man-made risks are largely in the past.

References

– “Transhumanism and the 2nd Law of Thermodynamics” – Video by G. Stolyarov II