
G. Stolyarov II and xpallodoc Discuss the Future – Video Interview

On November 30, 2014, Mr. Stolyarov was interviewed by YouTube user xpallodoc. The wide-ranging discussion encompassed visions of the future, indefinite life extension and the concept of I-ness, the future of money and economies, technological progress, virtual worlds, political barriers to progress, artificial intelligence, marriage and family, and being part of the push toward radical abundance and technological breakthroughs within our lifetimes.

References
– “Individual Empowerment through Emerging Technologies: Virtual Tools for a Better Physical World” – Video by G. Stolyarov II
– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Video by G. Stolyarov II

SENS or Cryonics?: My Answer to a Hypothetical Choice – Video by G. Stolyarov II

If Mr. Stolyarov had $1 billion to donate to life extension, would he donate it to SENS (Strategies for Engineered Negligible Senescence) or cryonics? Find out his answer.

References
– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Essay by G. Stolyarov II
– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Video by G. Stolyarov II
– SENS Research Foundation Website
– “Kim Suozzi Cryogenically Preserved After Battle With Brain Cancer” – Huffington Post – January 22, 2013

Transhumanism and Mind Uploading Are Not the Same – Video by G. Stolyarov II

In what is perhaps the most absurd attack on transhumanism to date, Mike Adams of NaturalNews.com equates this broad philosophy and movement with “the entire idea that you can ‘upload your mind to a computer'” and further posits that the only kind of possible mind uploading is the destructive kind, where the original, biological organism ceases to exist. Mr. Stolyarov refutes Adams’s equation of transhumanism with destructive mind uploading and explains that advocacy of mind uploading is neither a necessary nor a sufficient component of transhumanism.

References
– “Transhumanism and Mind Uploading Are Not the Same” – Essay by G. Stolyarov II
– “Transhumanism debunked: Why drinking the Kurzweil Kool-Aid will only make you dead, not immortal” – Mike Adams – NaturalNews.com – June 25, 2013
– SENS Research Foundation
– “Nanomedicine” – Wikipedia
– “Transhumanism: Towards a Futurist Philosophy” – Essay by Max More
– 2045 Initiative Website
– Bebionic Website
– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Essay by G. Stolyarov II
– “Immortality: Bio or Techno?” – Essay by Franco Cortese

We Seek Not to Become Machines, But to Keep Up with Them – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
July 14, 2013
******************************
This article attempts to clarify four areas within the movement of Substrate-Independent Minds and the discipline of Whole-Brain Emulation that are particularly ripe for misnomers and misconceptions.
***

Substrate-Independence 101:

  • Substrate-Independence:
    It is Substrate-Independence for Mind in general, but not any specific mind in particular.
  • The Term “Uploading” Misconstrues More than it Clarifies:
    Once WBE is experimentally-verified, we won’t be using conventional or general-purpose computers like our desktop PCs to emulate real, specific persons.
  • The Computability of the Mind:
    This concept has nothing to do with the brain operating like a computer. The liver is just as computable as the brain; their difference is one of computational intensity, not category.
  • We Don’t Want to Become The Machines – We Want to Keep Up With Them!:
    SIM & WBE are sciences of life extension first and foremost. It is not out of sheer technophilia, contempt of the flesh, or wanton want of machinedom that proponents of Uploading support it. It is, for many, because we fear that Recursively Self-Modifying AI will implement an intelligence explosion before Humanity has a chance to come along for the ride. The creation of any one entity superintelligent relative to the rest constitutes both an existential risk and an antithetical affront to Man, whose central and incessant essence is to make himself, to an ever greater degree, and not to have some artificial god do it for him or tell him how to do it.
Substrate-Independence
***

The term “substrate-independence” denotes the philosophical thesis of functionalism – that what is important about the mind and its constitutive sub-systems and processes is their function. If such a function could be recreated using an alternate series of component parts or procedural steps, or recreated on another substrate entirely, the philosophical thesis of functionalism holds that the result should be the same as the original, experientially speaking.

However, one rather common and ready-at-hand misinterpretation stemming from the term “Substrate-Independence” is the notion that we as personal selves could arbitrarily jump from mental substrate to mental substrate, since mind is software and software can be run on various general-purpose machines. The most common form of this notion is exemplified by scenarios laid out in various Greg Egan novels and stories, wherein a given person sends their mind encoded as a wireless signal to some distant receiver, to be reinstantiated upon arrival.

The term “substrate-independent minds” should denote substrate independence for mind in general – again, the philosophical thesis of functionalism – and not this second, illegitimate notion. In order to send oneself as such a signal, one would have to put all the processes constituting the mind “on pause” – that is, all causal interaction and thus causal continuity between the software components and processes instantiating our selves would be halted while the software was encoded as a signal, transmitted and subsequently decoded. We could expect this to be equivalent to temporary brain death or to destructive uploading without any sort of gradual replacement, integration, or transfer procedure. Each of these scenarios incurs the ceasing of all causal interaction and causal continuity among the constitutive components and processes instantiating the mind. Yes, we would be instantiated upon reaching our destination, but we can expect this to be as phenomenally discontinuous as brain death or destructive uploading.

There is much talk in the philosophical and futurist circles – where Substrate-Independent Minds are a familiar topic and a common point of discussion – on how the mind is software. This sentiment ultimately derives from functionalism, and the notion that when it comes to mind it is not the material of the brain that matters, but the process(es) emerging therefrom. And because almost all software is designed so as to be implemented on general-purpose (i.e., standardized) hardware, the notion follows that we should likewise be able to transfer the software of the mind into a new physical computational substrate with as much ease as we do ordinary software. While we would emerge from such a transfer functionally isomorphic with ourselves prior to the jump from computer to computer, we can expect this to be the phenomenal equivalent of brain death or destructive uploading – again, because all causal interaction and continuity between that software’s constitutive sub-processes has been discontinued. We would have been put on pause in the time between leaving one computer, whether as static signal or static solid-state storage, and arriving at the other.

This is not to say that we couldn’t transfer the physical substrate implementing the “software” of our mind to another body, provided the other body were equipped to receive such a physical substrate. But this doesn’t have quite the same advantage as beaming oneself to the other side of Earth, or Andromeda for that matter, at the speed of light.

But to transfer a given WBE to another mental substrate without incurring phenomenal discontinuity may very well involve a second gradual integration procedure, in addition to the one the WBE initially underwent (assuming it isn’t a product of destructive uploading). And indeed, this would be more properly thought of in the context of a new substrate being gradually integrated with the WBE’s existing substrate, rather than the other way around (i.e., portions of the WBE’s substrate being gradually integrated with an external substrate.) It is likely to be much easier to simply transfer a given physical/mental substrate to another body, or to bypass this need altogether by actuating bodies via tele-operation instead.

In summary, what is sought is substrate-independence for mind in general, and not for a specific mind in particular (at least not without a gradual integration procedure, like the type underlying the notion of gradual uploading, so as to transfer such a mind to a new substrate without causing phenomenal discontinuity).

The Term “Uploading” Misconstrues More Than It Clarifies

The term “Mind Uploading” has some drawbacks and creates common initial misconceptions. It is based on terminology originating from the context of conventional, contemporary computers – which may lead to the initial impression that we are talking about uploading a given mind into a desktop PC, to be run in the manner that Microsoft Word is run. This makes the notion of WBE more fantastic and incredible – and thus improbable – than it actually is. I don’t think anyone seriously speculating about WBE would entertain such a notion.

Another potential misinterpretation particularly likely to result from the term “Mind Uploading” is that we seek to upload a mind into a computer – as though it were nothing more than a simple file transfer. This, again, connotes modern paradigms of computation and communications technology that are unlikely to be used for WBE. It also creates the connotation of putting the mind into a computer – whereas a more accurate connotation, at least as far as gradual uploading as opposed to destructive uploading is concerned, would be bringing the computer gradually into the biological mind.

It is easy to see why the term initially came into use. The notion of destructive uploading was the first embodiment of the concept. The notion of gradual uploading so as to mitigate the philosophical problems pertaining to how much a copy can be considered the same person as the original, especially in contexts where they are both simultaneously existent, came afterward. In the context of destructive uploading, it makes more connotative sense to think of concepts like uploading and file transfer.

But in the notion of gradual uploading, portions of the biological brain – most commonly single neurons, as in Robert A. Freitas’s and Ray Kurzweil’s versions of gradual uploading – are replaced with in-vivo computational substrate, placed where the neuron it is replacing was located. Such a computational substrate would be operatively connected to electrical or electrochemical sensors (to translate the biochemical or, more generally, biophysical output of adjacent neurons into computational input that can be used by the computational emulation) and electrical or electrochemical actuators (to likewise translate computational output of the emulation into biophysical input that can be used by adjacent biological neurons). It is possible to have this computational emulation reside in a physical substrate existing outside of the biological brain, connected to in-vivo biophysical sensors and actuators via wireless communication (i.e., communicating via electromagnetic signal), but this simply introduces a potential lag-time that may then have to be overcome by faster sensors, faster actuators, or a faster emulation. The lag-time would likely be negligible, especially if the emulation were housed in a convenient module external to the body but kept “on it” at all times, minimizing the transmission delays that grow as one gets farther from such an external computational device. This would also likely necessitate additional computation to model the necessary changes to transmission speed in response to how far away the person is. Otherwise, signals that are meant to arrive at a given time could arrive too soon or too late, thereby disrupting functionality. However, placing the computational substrate in vivo obviates these potential logistical obstacles.
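The one-at-a-time replacement protocol described above can be sketched as a toy model. Everything here is a loud simplification: the “neurons” are trivial multiplicative units, the sensor/actuator pair is modeled as an identity translation on the signal, and `make_bio_neuron`, `make_emulated_neuron`, and the continuity check are hypothetical illustrations rather than any real neuroscience or actual WBE method.

```python
from typing import Callable, List

def make_bio_neuron(w: float) -> Callable[[float], float]:
    """A 'biological neuron': a fixed input-output function (toy stand-in)."""
    return lambda x: w * x

def make_emulated_neuron(w: float) -> Callable[[float], float]:
    """A functional replacement: same input-output behavior, different
    substrate. The sensor/actuator translation layer is modeled here as
    an identity mapping on the signal."""
    return lambda x: w * x

def network_output(units: List[Callable[[float], float]], signal: float) -> float:
    """Propagate a signal through the chain of units."""
    for unit in units:
        signal = unit(signal)
    return signal

weights = [0.5, 2.0, 1.5]
units = [make_bio_neuron(w) for w in weights]
reference = network_output(units, 1.0)  # network behavior before any replacement

# One unit at a time: swap in the emulated equivalent, then verify that
# network behavior is unchanged before touching the next unit.
for i, w in enumerate(weights):
    units[i] = make_emulated_neuron(w)
    assert abs(network_output(units, 1.0) - reference) < 1e-12
```

The point the sketch makes is structural: at every step of the loop the whole network remains operational and behaviorally identical, which is the gradual-uploading answer to the “original versus copy” objection discussed below.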

This notion is, I think, not brought into the discussion enough. It is an intuitively obvious notion if you’ve thought a great deal about Substrate-Independent Minds and frequented discussions on Mind Uploading. But to a newcomer who has heard the term Gradual Uploading for the first time, it is all too easy to think: “Yes, but then one emulated neuron would exist on a computer, and the original biological neuron would still be in the brain. So once you’ve gradually emulated all these neurons, you have an emulation on a computer and the original biological brain, still as separate physical entities. Then you have an original and a copy – so where does the gradual in Gradual Uploading come in? How is this any different from destructive uploading? At the end of the day you still have a copy and an original as separate entities.”

This seeming impasse is I think enough to make the notion of Gradual Uploading seem at least intuitively or initially incredible and infeasible before people take the time to read the literature and discover how gradual uploading could actually be achieved (i.e., wherein each emulated neuron is connected to biophysical sensors and actuators to facilitate operational connection and causal interaction with existing in-vivo biological neurons) without fatally tripping upon such seeming logistical impasses, as in the example above. The connotations created by the term I think to some extent make it seem so fantastic (as in the overly simplified misinterpretations considered above) that people write off the possibility before delving deep enough into the literature and discussion to actually ascertain the possibility with any rigor.

The Computability of the Mind

Another common misconception is that the feasibility of Mind Uploading rests upon the notion that the brain is a computer or operates like a computer. The worst version of this misinterpretation that I’ve come across is that proponents and supporters of Mind Uploading are claiming that the mind is similar in operation to current and conventional paradigms of computing.

Before I elaborate on why this is wrong, I’d like to point out a particularly harmful sentiment that can result from this notion. It makes the concept of Mind Uploading seem dehumanizing, because conventional computers don’t display anything like intelligence or emotion. This makes people conflate the possible behaviors of future computers with the behaviors of current computers. Obviously computers don’t feel happiness or love, the reasoning goes, and so to say that the brain is like a computer must be a farcical claim.

Machines don’t have to be as simple or as un-adaptable and invariant as they are today. The universe itself is a machine. In other words, either everything is a machine or nothing is.

This misunderstanding also makes people think that advocates and supporters of Mind Uploading are claiming that the mind is reducible to basic or simple autonomous operations, like cogs in a machine, which constitutes for many people a seeming affront to our privileged place in the universe as humans, in general, and to our culturally ingrained notions of human dignity being inextricably tied to physical irreducibility, in particular. The intuitive notions of human dignity and the ontologically privileged nature of humanity have yet to catch up with physicalism and scientific materialism (a.k.a. metaphysical naturalism). It is not the proponents of Mind Uploading who are raising these claims, but science itself – and for hundreds of years, I might add. Man’s privileged and physically irreducible ontological status has become more and more undermined throughout history, since at least as far back as Darwin’s theory of evolution, which brought the notion of the past and future phenotypic evolution of humanity into scientific plausibility for the first time.

It is also seemingly disenfranchising to many people, in that notions of human free will and autonomy seem to be challenged by physical reductionism and determinism – perhaps because many people’s notions of free will are still associated with a non-physical, untouchably metaphysical human soul (i.e., mind-body dualism) which lies outside the purview of physical causality. To compare the brain to a “mindless machine” is still for many people disenfranchising to the extent that it questions the legitimacy of their metaphysically tied notions of free will.

That the sheer audacity of experience and the raucous beauty of feeling are ultimately reducible to physical and procedural operations (I hesitate to use the word “mechanisms” for its likewise misconnotative conceptual associations) does not take away from them. If they were the result of some untouchable metaphysical property, a sentiment that mind-body dualism promulgated for quite some time, then there would be no way for us to understand them, to really appreciate them, and to change them (e.g., improve upon them) in any way. Physicalism and scientific materialism are needed if we are ever to see how it is done and ever to hope to change it for the better. Figuring out how things work is one of Man’s highest merits – and there is no reason Man’s urge to discover and determine the underlying causes of the world should not apply to his own self as well.

Moreover, the fact that experience, feeling, being, and mind result from the convergence of individually simple systems and processes makes the mind’s emergence from such simple convergence all the more astounding, amazing, and rare, not less! If the complexity and unpredictability of mind were the result of complex and unpredictable underlying causes (as the metaphysical notions of mind-body dualism connote), then the fact that mind turned out to be complex and unpredictable wouldn’t be much of a surprise. The simplicity of the mind’s underlying mechanisms makes the mind’s emergence all the more amazing, and should not take away from our human dignity but should instead raise it up to heights yet unheralded.

Now that we have addressed such potentially harmful second-order misinterpretations, we will address their root: the common misinterpretations likely to result from the phrase “the computability of the mind”. Not only does this phrase not say that the mind is similar in basic operation to conventional paradigms of computation – as though a neuron were comparable to a logic gate or transistor – but neither does it necessarily make the more credible claim that the mind is like a computer in general. The misinterpretation makes the notion of Mind Uploading seem dubious because it conflates two different types of physical systems – computers and the brain.

The kidney is just as computable as the brain. That is to say that the computability of mind denotes the ability to make predictively accurate computational models (i.e., simulations and emulations) of biological systems like the brain, and is not dependent on anything like a fundamental operational similarity between biological brains and digital computers. We can make computational models of a given physical system, feed it some typical inputs, and get a resulting output that approximately matches the real-world (i.e., physical) output of such a system.

The computability of the mind has very little to do with the mind acting as or operating like a computer, and much, much more to do with the fact that we can build predictively accurate computational models of physical systems in general. This also, advantageously, negates and obviates many of the seemingly dehumanizing and indignifying connotations identified above that often result from the claim that the brain is like a machine or like a computer. It is not that the brain is like a computer – it is just that computers are capable of predictively modeling the physical systems of the universe itself.
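The sense of “computability” at issue – building a predictively accurate model of a physical system, with no assumption that the system resembles a computer – can be illustrated with a minimal sketch. The clearance model and its parameters below are illustrative assumptions of mine, not anything from the article: a substance eliminated by the kidney at a rate proportional to its concentration (dC/dt = −kC), integrated numerically and checked against the known analytic behavior, which here stands in for the “physical” system’s real-world output.

```python
import math

def simulate_clearance(c0: float, k: float, t_end: float, dt: float = 1e-4) -> float:
    """Numerically integrate dC/dt = -k*C from t=0 to t_end (Euler steps),
    following only the system's local dynamics - no assumption that the
    system itself 'is' a computer."""
    c = c0
    for _ in range(int(t_end / dt)):
        c += -k * c * dt
    return c

c0, k, t = 100.0, 0.3, 5.0
predicted = simulate_clearance(c0, k, t)        # output of the computational model
actual = c0 * math.exp(-k * t)                  # the known 'physical' behavior
assert abs(predicted - actual) / actual < 0.01  # the model tracks the system
```

The kidney model works exactly as well as a neuronal one would in this respect: what is being exploited is the general modelability of physical systems, not any brain-computer resemblance.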

We Want Not To Become Machines, But To Keep Up With Them!

Too often is uploading portrayed as the means to superhuman speed of thought or to transcending our humanity. It is not that we want to become less human, or to become like a machine. For most Transhumanists, and indeed most proponents of Mind Uploading and Substrate-Independent Minds, meat is machinery anyway. In other words, there is no real (i.e., legitimate) ontological distinction between human minds and machines to begin with. Too often is uploading seen as the desire for superhuman abilities. Too often is it seen as a bonus, nice but ultimately unnecessary.

I vehemently disagree. Uploading has been from the start for me (and I think for many other proponents and supporters of Mind Uploading) a means of life extension, of deferring and ultimately defeating untimely, involuntary death, as opposed to an ultimately unnecessary means to better powers, a more privileged position relative to the rest of humanity, or to eschewing our humanity in a fit of contempt of the flesh. We do not want to turn ourselves into Artificial Intelligence, which is a somewhat perverse and burlesque caricature that is associated with Mind Uploading far too often.

The notion of gradual uploading is implicitly a means of life extension. Gradual uploading will be significantly harder to accomplish than destructive uploading. It requires a host of technologies and methodologies – brain-scanning, in-vivo locomotive systems such as but not limited to nanotechnology, or else extremely robust biotechnology – and a host of precautions to prevent causing phenomenal discontinuity, such as giving each non-biological functional replacement time to causally interact with adjacent biological components before the next biological component that it causally interacts with is likewise replaced. Gradual uploading is a much harder feat than destructive uploading, and the only advantage it has over destructive uploading is preserving the phenomenal continuity of a single specific person. In this way it is implicitly a means of life extension, rather than a means to the creation of AGI, because its only benefit is the preservation and continuation of a single, specific human life, and that benefit entails a host of added precautions and additional necessitated technological and methodological infrastructures.

If we didn’t have to fear the creation of recursively self-improving AI, biased towards being likely to recursively self-modify at a rate faster than humans are likely to (or indeed, are able to safely – that is, gradually enough to prevent phenomenal discontinuity), then I would favor biotechnological methods of achieving indefinite lifespans over gradual uploading. But with the way things are, I am an advocate of gradual Mind Uploading first and foremost because I think it may prove necessary to prevent humanity from being left behind by recursively self-modifying superintelligences. I hope that it ultimately will not prove necessary – but at the current time I feel that it is somewhat likely.

Most people who wish to implement or accelerate an intelligence explosion a la I.J. Good, and more recently Vernor Vinge and Ray Kurzweil, wish to do so because they feel that such a recursively self-modifying superintelligence (RSMSI) could essentially solve all of humanity’s problems – disease, death, scarcity, existential insecurity. I think that the potential benefits of creating a RSMSI are outweighed by the drastic increase in existential risk it would entail in making any one entity superintelligent relative to humanity. The old God of yore is finally going out of fashion, one and a quarter centuries late to his own eulogy. Let’s please not make another one, now with a little reality under his belt this time around.

Intelligence is a far greater source of existential and global catastrophic risk than any technology that could be wielded by such an intelligence (except, of course, for technologies that would allow an intelligence to increase its own intelligence). Intelligence can invent new technologies and conceive of ways to counteract any defense systems we put in place to protect against the destructive potentials of any given technology. A superintelligence is far more dangerous than rogue nanotech (i.e., grey-goo) or bioweapons. When intelligence comes into play, then all bets are off. I think culture exemplifies this prominently enough. Moreover, for the first time in history the technological solutions to these problems – death, disease, scarcity – are on the conceptual horizon. We can fix these problems ourselves, without creating an effective God relative to Man and incurring the extreme potential for complete human extinction that such a relative superintelligence would entail.

Thus uploading constitutes one of the means by which humanity can choose, volitionally, to stay on the leading edge of change, discovery, invention, and novelty, if the creation of a RSMSI is indeed imminent. It is not that we wish to become machines and eschew our humanity – rather the loss of autonomy and freedom inherent in the creation of a relative superintelligence is antithetical to the defining features of humanity. In order to preserve the uniquely human thrust toward greater self-determination in the face of such a RSMSI, or at least be given the choice of doing so, we may require the ability to gradually upload so as to stay on equal footing in terms of speed of thought and general level of intelligence (which is roughly correlative with the capacity to affect change in the world and thus to determine its determining circumstances and conditions as well).

In a perfect world we wouldn’t need to take the chance of phenomenal discontinuity inherent in gradual uploading. In gradual uploading there is always a chance, no matter how small, that we will come out the other side of the procedure as a different (i.e., phenomenally distinct) person. We can seek to minimize the chances of that outcome by extending the degree of graduality with which we gradually replace the material constituents of the mind, and by minimizing the scale at which we gradually replace those material constituents (i.e., gradual substrate replacement one ion-channel at a time would be likelier to ensure the preservation of phenomenal continuity than gradual substrate replacement neuron by neuron would be). But there is always a chance.

This is why biotechnological means of indefinite lifespans have an immediate advantage over uploading, and why if non-human RSMSI were not a worry, I would favor biotechnological methods of indefinite lifespans over Mind Uploading. But this isn’t the case; rogue RSMSI are a potential problem, and so the ability to secure our own autonomy in the face of a rising RSMSI may necessitate advocating Mind Uploading over biotechnological methods of indefinite lifespans.

Mind Uploading has some ancillary benefits over biotechnological means of indefinite lifespans as well, however. If functional equivalence is validated (i.e., if it is validated that the basic approach works), mitigating existing sources of damage becomes categorically easier. In physical embodiment, repairing structural, connectional, or procedural sub-systems in the body requires (1) a means of determining the source of damage and (2) a host of technologies and corresponding methodologies to enter the body and make physical changes to negate or otherwise obviate the structural, connectional, or procedural source of such damages, and then exit the body without damaging or causing dysfunction to other systems in the process. Both of these requirements become much easier in the virtual embodiment of whole-brain emulation.

First, looking toward requirement (2), we do not need to design any technologies and methodologies for entering and leaving the system without damage or dysfunction, or for actually implementing physical changes leading to the remediation of the sources of damage. In virtual embodiment this requires nothing more than rewriting information. In the case of WBE we would have the capacity to rewrite information as easily as it was written in the first place; while we would still need to know what changes to make (which is really the hard part here), actually implementing those changes would be as easy as rewriting a word-processing file. There is no categorical difference, since it is information, and we would already have a means of rewriting information.

Looking toward requirement (1), actually elucidating the structural, connectional or procedural sources of damage and/or dysfunction, we see that virtual embodiment makes this much easier as well. In physical embodiment we would need to make changes to the system in order to determine the source of the damage. In virtual embodiment we could run a section of emulation for a given amount of time, change or eliminate a given informational variable (i.e. structure, component, etc.) and see how this affects the emergent system-state of the emulation instance.

Iteratively doing this to different components and different sequences of components, in trial-and-error fashion, should lead to the elucidation of the structural, connectional or procedural sources of damage and dysfunction. The fact that an emulation can be run faster (thus accelerating this iterative change-and-check procedure) and that we can “rewind” or “play back” an instance of emulation time exactly as it occurred initially means that noise (i.e., sources of error) from natural systemic state-changes would not affect the results of this procedure, whereas in physicality systems and structures are always changing, which constitutes a source of experimental noise. The conditions of the experiment would be exactly the same in every iteration of this change-and-check procedure. Moreover, the ability to arbitrarily speed up and slow down the emulation will aid in our detecting and locating the emergent changes caused by changing or eliminating a given microscale component, structure, or process.
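The iterative change-and-check procedure can be sketched as a toy ablation study, under loudly simplified assumptions: the “emulation” is a deterministic function of its components (so every replay from the same state is exactly repeatable, mirroring the noise-free replay described above), and `run_emulation`, `localize_fault`, and the restoration criterion are hypothetical illustrations, not any real WBE tooling.

```python
from typing import Callable, Dict, List

def run_emulation(weights: Dict[str, float], signal: float) -> float:
    """Stand-in for running an emulation instance: the output is a
    deterministic function of its components, so a 'rewound' replay
    reproduces the run exactly - there is no experimental noise."""
    out = signal
    for w in weights.values():
        out *= w
    return out

def localize_fault(weights: Dict[str, float],
                   reference: Callable[[float], float],
                   probe: float = 1.0) -> List[str]:
    """Ablate one component at a time (replace it with a pass-through) and
    flag any component whose removal restores the healthy reference
    behavior - the change-and-check loop in miniature."""
    suspects = []
    for name in weights:
        ablated = {k: (1.0 if k == name else v) for k, v in weights.items()}
        err = abs(run_emulation(ablated, probe) - reference(probe))
        if err < 1e-9:  # behavior restored: likely source of the damage
            suspects.append(name)
    return suspects

healthy = lambda s: s * 0.9 * 1.1          # known-good behavior
damaged = {"a": 0.9, "b": 1.1, "c": 0.2}   # "c" is the damaged component
```

Here `localize_fault(damaged, healthy)` singles out `"c"` because every iteration runs under identical conditions; in a physically embodied system, ongoing state changes would add exactly the experimental noise this determinism removes.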

Thus the process of finding the sources of damage correlative with disease and aging (especially insofar as the brain is concerned) could be greatly improved through the process of uploading. Moreover, WBE should accelerate the technological and methodological development of the computational emulation of biological systems in general, meaning that it would be possible to use such procedures to detect the structural, connectional, and procedural sources of age-related damage and systemic dysfunction in the body itself, as opposed to just the brain, as well.

Note that this iterative change-and-check procedure would be just as possible via destructive uploading as it would with gradual uploading. Moreover, in terms of people actually instantiated as whole-brain emulations, actually remediating those structural, connectional, and/or procedural sources of damage is much easier for WBEs than for physically embodied humans. Incidentally, if being able to distinguish the homeostatic, regulatory, and metabolic structures and processes in the brain from the computational or signal-processing structures and processes in the brain is a requirement for uploading (which I don’t think it necessarily is, although I do think that such a distinction would decrease the ultimate computational intensity and thus computational requirements of uploading, thereby allowing it to be implemented sooner and have wider availability), then this iterative change-and-check procedure could also be used to accelerate the elucidation of such a distinction, for the same reasons that it could accelerate the elucidation of structural, connectional, and procedural sources of age-related systemic damage and dysfunction.

Lastly, while uploading (particularly instances in which a single entity or small group of entities is uploaded prior to the rest of humanity – i.e., not a maximally distributed intelligence explosion) itself constitutes a source of existential risk, it also constitutes a means of mitigating existential risk. Currently we stand on the surface of the earth, naked to whatever might lurk in the deep night of space. We have not been watching the sky for long enough to know with any certainty that some unforeseen cosmic process could not come along to wipe us out at any time. Uploading would allow at least a small portion of humanity to live virtually on a computational substrate located deep underground, away from the surface of the earth and its inherent dangers, thus preserving the future human heritage should an extinction event befall humanity. Uploading would also remove the danger of being physically killed by some accident of physicality, like being hit by a bus or struck by lightning.

Uploading is also the most resource-efficient means of life extension on the table: virtual embodiment essentially negates the need for most physical resources, requiring only one – energy – and increasing computational price-performance means that the amount a given quantity of energy can accomplish is continually growing.

It also mitigates the most pressing ethical problem of indefinite lifespans – overpopulation. In virtual embodiment, overpopulation ceases to be an issue almost ipso facto. I agree with John Smart’s STEM-compression hypothesis – that, in the long run, the advantages proffered by virtual embodiment will make choosing it over physical embodiment an obvious choice for most civilizations, and I think it will be the volitional choice for most future persons. It is the safer, more resource-efficient (and thus more ethical, if one thinks that forestalling future births in order to maintain existing lives is unethical), and more advantageous choice. We will not need to say: migrate into virtuality if you want another physically embodied child. Most people will make the choice to go virtual themselves, simply due to the numerous advantages and the lack of any experiential incomparabilities (i.e., modalities of experience possible in physicality but not in VR).

So, in summary: yes, Mind Uploading (especially gradual uploading) is more a means of life extension than a means to arbitrarily greater speed of thought, intelligence, or power (i.e., capacity to effect change in the world). We do not seek to become machines, only to retain the capability of choosing to remain on equal footing with them if the creation of RSMSI is indeed imminent. There is no other reason to increase our collective speed of thought, and to do so would be arbitrary – unless we expected to be unable to prevent the physical end of the universe, in which case doing so would increase the ultimate amount of time, and the number of lives, that could be instantiated in the time we have left.

The fallacy of many of these misconceptions may be glaringly obvious, especially to those readers familiar with Mind Uploading as a notion and with Substrate-Independent Minds and/or Whole-Brain Emulation as disciplines. I may be to some extent preaching to the choir in these cases. But I find many of these misinterpretations far too prevalent and recurrent to be left alone.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Transhumanism and Mind Uploading Are Not the Same – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 10, 2013
******************************

In what is perhaps the most absurd attack on transhumanism to date, Mike Adams of NaturalNews.com equates this broad philosophy and movement with “the entire idea that you can ‘upload your mind to a computer’” and further posits that the only kind of possible mind uploading is the destructive kind, where the original, biological organism ceases to exist. Adams goes so far as calling transhumanism a “death cult much like the infamous Heaven’s Gate cult led by Marshal Applewhite.”

I will not devote this essay to refuting any of Adams’s arguments against destructive mind uploading, because no serious transhumanist thinker of whom I am aware endorses the kind of procedure Adams uses as a straw man. For anyone who wishes to continue existing as an individual, uploading the contents of the mind to a computer and then killing the body is perhaps the most bizarrely counterproductive possible activity, short of old-fashioned suicide. Instead, Adams’s article – all the misrepresentations aside – offers the opportunity to make important distinctions of value to transhumanists.

First, having a positive view of mind uploading is neither necessary nor sufficient for being a transhumanist. Mind uploading has been posited as one of several routes toward indefinite human life extension. Other routes include the periodic repair of the existing biological organism (as outlined in Aubrey de Grey’s SENS project or as entailed in the concept of nanomedicine) and the augmentation of the biological organism with non-biological components (Ray Kurzweil’s actual view, as opposed to the absurd positions Adams attributes to him). Transhumanism, as a philosophy and a movement, embraces the lifting of the present limitations upon the human condition – limitations that arise out of the failures of human biology and unaltered physical nature. Max More, in “Transhumanism: Towards a Futurist Philosophy”, writes that “Transhumanism differs from humanism in recognizing and anticipating the radical alterations in the nature and possibilities of our lives resulting from various sciences and technologies such as neuroscience and neuropharmacology, life extension, nanotechnology, artificial ultraintelligence, and space habitation, combined with a rational philosophy and value system.” That Adams would take this immensity of interrelated concepts, techniques, and aspirations and equate it to destructive mind uploading is, plainly put, mind-boggling. There is ample room in transhumanism for a variety of approaches toward lifting the limitations of the human condition. Some of these approaches will be more successful than others, and no one approach is obligatory for those wishing to consider themselves transhumanists.

Moreover, Adams greatly misconstrues the positions of those transhumanists who do support mind uploading. For most such transhumanists, a digital existence is not seen as superior to their current biological existences, but rather as a necessary recourse if or when it becomes impossible to continue maintaining a biological existence. Dmitry Itskov’s 2045 Initiative is perhaps the most prominent example of the pursuit of mind uploading today. The aim of the initiative is to achieve cybernetic immortality in a stepwise fashion, through the creation of a sequence of avatars that gives the biological human an increasing amount of control over non-biological components. Avatar B, planned for circa 2020-2025, would involve a human brain controlling an artificial body. If successful, this avatar would prolong the existence of the biological brain when other components of the biological body have become too irreversibly damaged to support it. Avatar C, planned for circa 2030-2035, would involve the transfer of a human mind from a biological to a cybernetic brain, after the biological brain is no longer able to support life processes. There is no destruction intended in the 2045 Avatar Project Milestones, only preservation of some manner of intelligent functioning of a person whom the status quo would instead relegate to becoming food for worms. The choice between decomposition and any kind of avatar is a no-brainer (well, a brainer actually, for those who choose the latter).

Is Itskov’s path toward immortality the best one? I personally prefer SENS, combined with nanomedicine and piecewise artificial augmentations of the sort that are already beginning to occur (witness the amazing bebionic3 prosthetic hand). Itskov’s approach appears to assume that the technology for transferring the human mind to an entirely non-biological body will become available sooner than the technology for incrementally maintaining and fortifying the biological body to enable its indefinite continuation. My estimation is the reverse. Before scientists will be able to reverse-engineer not just the outward functions of a human brain but also its immensely complex and intricate internal structure, we will have within our grasp the ability to conquer an ever greater number of perils that befall the biological body and to repair the body using both biological and non-biological components.

The biggest hurdle for mind uploading to overcome is one that does not arise with the approach of maintaining the existing body and incrementally replacing defective components. This hurdle is the preservation of the individual’s unique and irreplaceable vantage point upon the world – his or her direct sense of being that person and no other. I term this direct vantage point an individual’s “I-ness”.  Franco Cortese, in his immensely rigorous and detailed conceptual writings on the subject, calls it “subjective-continuity” and devotes his attention to techniques that could achieve gradual replacement of biological neurons with artificial neurons in such a way that there is never a temporal or operational disconnect between the biological mind and its later cybernetic instantiation. Could the project of mind uploading pursue directions that would achieve the preservation of the “I-ness” of the biological person? I think this may be possible, but only if the resulting cybernetic mind is structurally analogous to the biological mind and, furthermore, maintains the temporal continuity of processes exhibited by an analog system, as opposed to a digital system’s discrete “on-off” states and the inability to perform multiple exactly simultaneous operations. Furthermore, only by developing the gradual-replacement approaches explored by Cortese could this prospect of continuing the same subjective experience (as opposed to simply creating a copy of the individual) be realized. But Adams, in his screed against mind uploading, seems to ignore all of these distinctions and explorations. Indeed, he appears to be oblivious of the fact that, yes, transhumanists have thought quite a bit about the philosophical questions involved in mind uploading. 
He seems to think that in mind uploading, you simply “copy the brain and paste it somewhere else” and hope that “somehow magically that other thing becomes ‘you.’” Again, no serious proponent of mind uploading – and, more generally, no serious thinker who has considered the subject – would hold this misconception.

Adams is wrong on a still further level, though. Not only is he wrong to equate transhumanism with mind uploading; not only is he wrong to declare all mind uploading to be destructive – he is also wrong to condemn the type of procedure that would simply make a non-destructive copy of an individual. This type of “backup” creation has indeed been advocated by transhumanists such as Ray Kurzweil. While a pure copy of one’s mind or its contents would not transfer one’s “I-ness” to a digital substrate and would not enable one to continue experiencing existence after a fatal illness or accident, it could definitely help an individual regain his memories in the event of brain damage or amnesia. Furthermore, if the biological individual were to irreversibly perish, such a copy would at least preserve vital information about the biological individual for the benefit of others. Moreover, it could enable the biological individual’s influence upon the world to be more powerfully actualized by a copy that considers itself to have the biological individual’s memories, background, knowledge, and personality. If we had with us today copies of the minds of Archimedes, Benjamin Franklin, and Nikola Tesla, we would certainly all benefit greatly from continued outpourings of technological and philosophical innovation. The original geniuses would not know or care about this, since they would still be dead, but we, in our interactions with minds very much like theirs, would be immensely better off than we are with only their writings and past inventions at our disposal.

Yes, destructive digital copying of a mind would be a bafflingly absurd and morally troubling undertaking – but recognition of this is neither a criticism of transhumanism nor of any genuinely promising projects of mind uploading. Instead, it is simply a matter of common sense, a quality which Mike Adams would do well to acquire.

Immortality: Bio or Techno? – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 5, 2013
******************************
This essay is the eleventh and final chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first ten chapters were previously published on The Rational Argumentator under the following titles:
***

I Was a Techno-Immortalist Before I Came of Age

From the preceding chapters in this series, one can see that I recapitulated many notions and conclusions found in normative Whole-Brain Emulation. I realized that measuring functional divergence between a candidate functional-equivalent and its original – through the process of virtually or artificially replicating environmental stimuli so as to coordinate their inputs – provides an experimental methodology for empirically validating the sufficiency and efficacy of different approaches. (Note, however, that such tests could not be performed to determine which NRU-designs or replication-approaches would preserve subjective-continuity, if the premises entertained during later periods of my project – that subjective-continuity may require a sufficient degree of operational “sameness”, and not just a sufficient degree of functional “sameness” – are correct.) I realized that we would only need to replicate in intensive detail and rigor those parts of our brain manifesting our personalities and higher cognitive faculties (i.e., the neocortex), and could get away with replicating at lower functional resolution the parts of the nervous system dealing with perception, actuation, and the feedback between perception and actuation.

I read Eric Drexler’s Engines of Creation and imported the use of nanotechnology to facilitate both functional-replication (i.e., the technologies and techniques needed to replicate the functional and/or operational modalities of existing biological neurons) and the intensive, precise, and accurate scanning necessitated thereby. This was essentially Ray Kurzweil’s and Robert Freitas’s approach to the technological infrastructure needed for mind-uploading, as I discovered in 2010 via The Singularity is Near.

My project also bears striking similarities to Dmitry Itskov’s Project Avatar. My work on conceptual requirements for transplanting the biological brain into a fully cybernetic body — taking advantage of the technological and methodological infrastructures already in development for use in the separate disciplines of robotics, prosthetics, Brain-Computer Interfaces, and sensory substitution to facilitate the operations of the body — is a prefigurement of his Phase 1. My later work on approaches to the functional replication of neurons for the purpose of gradual substrate replacement/transfer and integration also parallels his later phases, in which the brain is gradually replaced with an equivalent computational emulation.

The main difference between my own and the extant Techno-Immortalist approaches, however, lies in my later inquiries into neglected potential bases for (a) our sense of experiential subjectivity (the feeling of being, what I’ve called immediate subjective-continuity) — and thus the entailed requirements for mental substrates aiming to maintain or attain such immediate subjectivity — and (b) our sense of temporal subjective-continuity (the feeling of being the same person through a process of gradual substrate-replacement — which, I take pains to remind the reader, already exists in the biological brain via the natural biological process of molecular turnover, which I called metabolic replacement throughout the course of the project), and, likewise, the requirements for mental substrates aiming to maintain temporal subjective-continuity through a gradual substrate-replacement/transfer procedure.

In this final chapter, I summarize the main approaches to subjective-continuity thus far considered, including possible physical bases for its current existence and the entailed requirements for NRU designs (that is, for Techno-Immortalist approaches to indefinite-longevity) that maintain such physical bases of subjective-continuity. I will then explore why “Substrate-Independent Minds” is a useful and important term, and try to dispel one particularly common and easy-to-make misconception resulting from it.

Why Should We Worry about Subjective-Continuity?

This concern marks perhaps the most telling difference between my project and normative Whole-Brain Emulation. Instead of stopping at the presumption that functional equivalence correlates with immediate and temporal subjective-continuity, I explored several features of neural operation that looked like candidates for providing a basis for both types of subjective-continuity, by looking for those systemic properties and aspects that the biological brain possesses and other physical systems do not. The physical system underlying the human mind (i.e., the brain) possesses experiential subjectivity; my premise was that we should look for properties not shared by other physical systems to find a possible basis for the property of immediate subjective-continuity. I’m not claiming that any of the aspects and properties considered definitely constitute such a basis; they were merely the avenues I explored throughout my 4-year quest to conquer involuntary death. I do claim, however, that we are forced to conclude either that some aspect shared by the individual components (e.g., neurons) of the brain, and not shared by other types of physical systems, forms such a basis (which doesn’t preclude the possibility of immediate subjective-continuity being a spectrum or gradient rather than a definitive “thing” or process with non-variable parameters), or else that immediate subjective-continuity is a normal property of all physical systems, from atoms to rocks.

A phenomenological proof of the non-equivalence of function and subjectivity or subjective-experientiality is the physical irreducibility of qualia – that we could understand in intricate detail the underlying physics of the brain and sense-organs, and nowhere derive or infer the nature of the qualia such underlying physics embodies. To experimentally verify which approaches to replication preserve both functionality and subjectivity would necessitate a science of qualia. This could be conceivably attempted through making measured changes to the operation or inter-component relations of a subject’s mind (or sense organs)—or by integrating new sense organs or neural networks—and recording the resultant changes to his experientiality—that is, to what exactly he feels. Though such recordings would be limited to his descriptive ability, we might be able to make some progress—e.g., he could detect the generation of a new color, and communicate that it is indeed a color that doesn’t match the ones normally available to him, while still failing to communicate to others what the color is like experientially or phenomenologically (i.e., what it is like in terms of qualia). This gets cruder the deeper we delve, however. 
While we have unchanging names for some “quales” (e.g., green, sweetness, hot, and cold), when it comes to the qualia corresponding with our perception of our own “thoughts” (by which I designate all non-normatively-perceptual experiential modalities available to the mind – thus including wordless “daydreaming” and excluding autonomic functions like digestion or respiration), we have both far less precision (i.e., fewer words to describe them) and less accuracy (i.e., too many words for one thing, which the subject may confuse; the lack of a quantitative definition for words relating to emotions and mental modalities/faculties seems to ensure that errors are carried forward and increase with each iteration, making precise correlation of operational/structural changes with changes to qualia or experientiality increasingly harder and more unlikely).

Thus whereas the normative movements of Whole-Brain Emulation and Substrate-Independent Minds stopped at functional replication, I explored approaches to functional replication that preserved experientiality (i.e., a subjective sense of anything) and that maintained subjective-continuity (the experiential correlate of feeling like being yourself) through the process of gradual substrate-transfer.

I do not mean to undermine in any way Whole-Brain Emulation and the movement towards Substrate-Independent Minds promoted by such people as Randal Koene via, formerly, his minduploading.org website and, more recently, his Carbon Copies project, Anders Sandberg and Nick Bostrom through their WBE Roadmap, and various other projects on connectomes. These projects are untellably important, but conceptions of subjective-continuity (not pertaining to its relation to functional equivalence) are beyond their scope.

Whether or not subjective-continuity is possible through a gradual-substrate-replacement/transfer procedure is not under question. That we achieve and maintain subjective-continuity despite our constituent molecules being replaced within a period of 7 years, through what I’ve called “metabolic replacement” but what would more normatively be called “molecular-turnover” in molecular biology, is not under question either. What is under question is (a) what properties biological nervous systems possess that could both provide a potential physical basis for subjective-continuity and that other physical systems do not possess, and (b) what the design requirements are for approaches to gradual substrate replacement/transfer that preserve such postulated sources of subjective-continuity.

Graduality

This was the first postulated basis for preserving temporal subjective-continuity. Our bodily systems’ constituent molecules are all replaced within a span of 7 years, which provides empirical verification for the existence of temporal subjective-continuity through gradual substrate replacement. This is not, however, an actual physical basis for immediate subjective-continuity, like the later avenues of enquiry; it is a way of avoiding externally induced subjective-discontinuity rather than of maintaining an existing biological basis for subjective-continuity. We are most likely to avoid negating subjective-continuity through a substrate-replacement procedure if we maintain the existing degree of graduality (the molecular-turnover or “metabolic-replacement” rate) that exists in biological neurons.

The reasoning behind concerns of graduality also serves to illustrate a common misconception created by the term “Substrate-Independent Minds”. This term should denote the premise that mind can be instantiated on different types of substrate, in the way that a given computer program can run on different types of computational hardware. It stems from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that mind is an emergent process not reducible to its isolated material constituents, while still being instantiated thereby. This first (legitimate) interpretation is a refutation of all claims of metaphysical vitalism or substance dualism. The term should not denote the claim that because mind is software, we can send our minds (say, encoded in a wireless signal) from one substrate to another without subjective-discontinuity. This second meaning would incur the emergent effect of a non-gradual substrate-replacement procedure (that is, the wholesale reconstruction of a duplicate mind without any gradual integration procedure). In such a case one stops all causal interaction between components of the brain – in effect putting it on pause. The brain is now static. This is different even from being in an inoperative state, where at least the components (i.e., neurons) still undergo minor operational fluctuations and are still “on” in an important sense (see “Immediate Subjective-Continuity” below), which is not the case here. Beaming between substrates necessitates that all causal interaction – and thus procedural continuity – between software-components be halted during the interval of time in which the information is encoded, sent wirelessly, and subsequently decoded. The mind would be reinstantiated upon arrival in the new substrate, yes, but not without being put on pause in the interim.
The phrase “Substrate-Independent Minds” is an important and valuable one and should indeed be championed with righteous vehemence – but only in regard to its first meaning (that mind can be instantiated on various different substrates) and not its second, illegitimate meaning (that we ourselves can switch between mental substrates, without any sort of gradual-integration procedure, and still retain subjective-continuity).

Later lines of thought in this regard consisted of positing several sources of subjective-continuity and then conceptualizing various different approaches or varieties of NRU-design that would maintain these aspects through the gradual-replacement procedure.

Immediate Subjective-Continuity

This line of thought explored whether certain physical properties of biological neurons provide the basis for subjective-continuity, and whether current computational paradigms would need to possess such properties in order to serve as a viable substrate-for-mind – that is, one that maintains subjective-continuity. The biological brain has massive parallelism – that is, separate components are instantiated concurrently in time and space. They actually exist and operate at the same time. By contrast, current paradigms of computation, with a few exceptions, are predominantly serial. They instantiate a given component or process one at a time and jump between components or processes so as to integrate these separate instances and create the illusion of continuity. If such computational paradigms were used to emulate the mind, then only one component (e.g., a neuron or ion-channel, depending on the chosen model-scale) would be instantiated at a given time. This line of thought postulates that computers emulating the mind may need to be massively parallel in the same way that the biological brain is in order to preserve immediate subjective-continuity.
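The serial-versus-parallel distinction can be made concrete with a toy simulation (hypothetical code, not from the essay): a synchronous update computes every unit’s next state from the same simultaneous snapshot, while a serial in-place update has only one “current” unit at any instant, so later units see a mixture of old and already-updated states. Note that a serial machine can still correctly compute the synchronous update; the sketch illustrates only the ordering distinction, not the physical concurrency at issue in this line of thought.

```python
def step_parallel(states, weights):
    # Parallel semantics: every unit's next state is computed from the
    # same simultaneous snapshot of all current states.
    return [sum(w * s for w, s in zip(row, states)) for row in weights]

def step_serial_in_place(states, weights):
    # Serial in-place semantics: units are updated one at a time, so each
    # later unit sees a mixture of old and already-updated states.
    states = list(states)
    for i, row in enumerate(weights):
        states[i] = sum(w * s for w, s in zip(row, states))
    return states

# Toy two-unit network: the two update disciplines diverge.
w = [[0.5, 0.5], [0.5, 0.5]]
parallel_result = step_parallel([1.0, 0.0], w)         # [0.5, 0.5]
serial_result = step_serial_in_place([1.0, 0.0], w)    # [0.5, 0.25]
```

The divergence between the two results shows that serializing the instantiation of components is not automatically neutral: it changes what each component “sees” unless the snapshot is explicitly preserved.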

Procedural Continuity

Much like the preceding line of thought, this one postulates that a possible basis for temporal subjective-continuity is the resting membrane potential of neurons. While in an inoperative state – i.e., not being impinged upon by incoming action-potentials, or not being stimulated – the neuron (a) isn’t definitively off, but rather produces a baseline voltage that assures that there is no break (or region of discontinuity) in its operation, and (b) still undergoes minor fluctuations from the baseline value within a small deviation-range, thus showing that causal interaction among the components emergently instantiating that resting membrane potential (namely, ion-pumps) never halts. Logic gates, on the other hand, do not produce a continuous voltage when in an inoperative state. This line of thought claims that computational elements used to emulate the mind should exhibit the generation of such a continuous inoperative-state signal (e.g., voltage) in order to maintain subjective-continuity. The stronger version of the claim holds that the continuous inoperative-state signal produced by such computational elements should undergo minor fluctuations (i.e., state-transitions) within the range of the larger inoperative-state signal, which maintains causal interaction among lower-level components and thus exhibits the postulated basis for subjective-continuity – namely, procedural continuity.

Operational Isomorphism

This line of thought claims that a possible source of subjective-continuity is the similarity of the baseline components comprising the emergent system instantiating mind. In physicality this isn’t a problem, because the higher-scale components (e.g., single neurons; sub-neuron components like ion-channels and ion-pumps; and the individual protein complexes forming the sub-components of an ion-channel or pump) are instantiated by the lower-level components. Those lower-level components are more similar in terms of the rules determining their behavior and state-changes. At the molecular scale, the features determining state-changes (intra-molecular forces, atomic valences, etc.) are the same. This changes as we go up the scale – most notably at the scale of high-level neural regions/systems. In a software model, however, we have a choice as to what scale we use as our model-scale. This postulated source of subjective-continuity would entail that we choose as our model-scale one at which the components have a high degree of this property (operational isomorphism, or similarity) and that we not choose a scale at which the components have a lesser degree of it.

Operational Continuity

This line of thought explored the possibility that we might introduce operational discontinuity by modeling (i.e., computationally instantiating) not the software instantiated by the physical components of the neuron, but instead those physical components themselves – which, for illustrative purposes, can be considered as the difference between instantiating software directly and instantiating the physics of the logic gates giving rise to that software. Though the software would still be instantiated – vicariously, as a result of computationally instantiating its biophysical foundation rather than the software directly – we may be introducing additional operational steps and thus adding an unnecessary dimension of discontinuity that needlessly jeopardizes the likelihood of subjective-continuity.

These concerns are wholly divorced from functionalist concerns. If we disregarded these potential sources of subjective-continuity, we could still functionally replicate a mind in all empirically verifiable measures yet nonetheless fail to create a mind possessing experiential subjectivity. Moreover, while the verification experiments discussed in Part 2 do provide a falsifiable methodology for determining which approaches best satisfy the requirements of functional equivalence, they do not provide a method of determining which postulated sources of subjective-continuity are true – simply because we have no falsifiable measures of either immediate or temporal subjective-discontinuity, other than functionality. If functional equivalence failed, that would tell us that subjective-continuity had failed to be maintained; if functional equivalence were achieved, however, this would not necessitate that subjective-continuity had been maintained.

Bio or Cyber? Does It Matter?

Biological approaches to indefinite longevity, such as Aubrey de Grey's SENS and Michael Rose's evolutionary selection for longevity, among others, have both comparative advantages and drawbacks. Their chances of introducing subjective-discontinuity are virtually nonexistent compared to non-biological approaches (which I will refer to as Techno-Immortalist approaches), and this makes them immediately more appealing. However, it remains to be seen whether the advantages of the techno-immortalist approach outweigh its comparative dangers in regard to its potential to introduce subjective-discontinuity. If such dangers can be obviated, the techno-immortalist approach has certain potentials which Bio-Immortalist projects lack, or which are at least comparatively harder to facilitate using biological approaches.

Perhaps foremost among these potentials is the ability to actively modulate and modify the operations of individual neurons. If integrated across scales (that is, the concerted modulation/modification of whole emergent neural networks and regions via operational control over their constituent individual neurons), this would allow us to take control over our own experiential and functional modalities (i.e., our mental modes of experience and general abilities/skills), thus increasing our degree of self-determination and the control we exert over the circumstances and determining conditions of our own being. Self-determination is the sole central and incessant essence of man; it is his means of self-overcoming, of self-dissent in a striving towards self-realization, and the ability to increase the extent of such self-control, self-mastery, and self-actualization is indeed a comparative advantage of techno-immortalist approaches.

Modulating and modifying biological neurons, on the other hand, necessitates either high-precision genetic engineering or, more likely, the use of nanotechnology (i.e., NEMS): whereas the proposed NRUs already have the ability to controllably vary their operations, biological neurons require an external technological infrastructure to facilitate such active modulation and modification.

Biological approaches to increased longevity also appear to necessitate less technological infrastructure in terms of basic functionality. Techno-immortalist approaches require precise scanning technologies and techniques that neither damage nor distort (i.e., affect to the point of operational and/or functional divergence from their normal in situ state of affairs) the features and properties they are measuring. However, there is a useful distinction to be made between biological approaches to increased longevity and biological approaches to indefinite longevity. Aubrey de Grey's notion of Longevity Escape Velocity (LEV) serves to illustrate this distinction. With SENS and most biological approaches, he points out, although remediating certain biological causes of aging will extend our lives, by that time other causes of aging that had been superseded (i.e., prevented from making a significant impact on aging) by the higher-impact causes may begin to make a non-negligible impact. Aubrey's proposed solution is LEV: if we can develop remedies for these newly significant causes within the amount of time gained by remediating the first set of causes, then we can stay on the leading edge and continue to prolong our lives. This is in contrast to approaches like Eric Drexler's conception of nanotechnological cell-maintenance and cell-repair systems, which can fix any source of molecular damage or disarray vicariously (not by eliminating the source, but through iterative repair and/or replacement of its effects), and which would therefore continue to work on any new molecular causes of damage without requiring upgrades or innovations to their underlying technological and methodological infrastructures.

Such systems would be more appropriately deemed indefinite-biological-longevity technologies, in contrast to mere biological-longevity technologies. Techno-immortalist approaches are by and large exclusively of the indefinite-longevity-extension variety, and so have an advantage over certain biological approaches to increased longevity; such advantages do not, however, apply to biological approaches to indefinite longevity.

A final advantage of techno-immortalist approaches is the independence from external environments that they provide. More durable bodies would make death by accident far less likely, and independence from external environments means that certain extremes of temperature, pressure, impact-velocity, atmosphere, etc., would not immediately entail our death.

I do not want to discredit any approaches to immortality discussed in this essay, nor any I haven't mentioned. Every striving and attempt at immortality is virtuous and righteous, and this sentiment will only become more and more apparent, culminating on the day when humanity looks back and wonders how we could have spent so much money and effort on the Space Race to the Moon, with no perceivable scientific, resource, or monetary gain (though there were some nationalistic and militaristic considerations, in terms of America not being superseded on either account by Russia), yet took so long to make a concerted global effort to first demand and then implement well-funded attempts to finally defeat death: that inchoate progenitor of 100,000 unprecedented cataclysms a day. It's true: the world ends 100,000 times a day, never to be lighted upon again for all of eternity. Every day. What have you done to stop it?

So What?

Indeed, so what? What does this all mean? After all, I never actually built any systems or did any physical experimentation. I did, however, do a significant amount of conceptual development and thinking, on both the practical consequences (i.e., required technologies and techniques, different implementations contingent upon different premises and possibilities, etc.) and the larger social and philosophical repercussions of immortality, prior to finding out about other approaches. And I planned on doing physical experimentation and building physical systems; but I thought that working on the ideas in my youth, until such time as I was in a position to test and implement them more formally via academia or private industry, would be better for the long-term success of the endeavor.

As noted in Chapter 1, this reifies the naturality and intuitive simplicity of indefinite longevity’s ardent desirability and fervent feasibility, along a large variety of approaches ranging from biotechnology to nanotechnology to computational emulation. It also reifies the naturality and desirability of Transhumanism. I saw one of the virtues of this vision as its potential to make us freer, to increase our degree of self-determination, as giving us the ability to look and feel however we want, and the ability to be—and more importantly to become—anything we so desire. Man is marked most starkly by his urge and effort to make his own self—to formulate the best version of himself he can, and then to actualize it. We are always reaching toward our better selves—striving forward in a fit of unbound becoming toward our newest and thus truest selves; we always have been, and with any courage we always will.

Transhumanism is but the modern embodiment of our ancient striving towards increased self-determination and self-realization—of all we’ve ever been and done. It is the current best contemporary exemplification of what has always been the very best in us—the improvement of self and world. Indeed, the ‘trans’ and the ‘human’ in Transhumanism can only signify each other, for to be human is to strive to become more than human—or to become more so human, depending on which perspective you take.

So come along and long for more with me; the best is e’er yet to be!

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Koene, R. (2011). What is carboncopies.org? Retrieved February 28, 2013, from http://www.carboncopies.org/

Rose, M. (2004). Biological Immortality. In B. Klein (Ed.), The Scientific Conquest of Death (pp. 17-28). Immortality Institute.

Sandberg, A., & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap (Technical Report #2008-3). Retrieved February 28, 2013, from http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

Sandberg, A., Bostrom, N., & Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013, from http://www.minduploading.org/

de Grey, A. D. N. J. (2004). Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now. PLoS Biology, 2(6), e187. doi:10.1371/journal.pbio.0020187

Maintaining the Operational Continuity of Replicated Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 3, 2013
******************************
This essay is the tenth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first nine chapters were previously published on The Rational Argumentator under the following titles:
***

Operational Continuity

One of the reasons for continuing conceptual development of the physical-functionalist NRU (neuron-replication-unit) approach, despite the perceived advantages of the informational-functionalist approach, was the possibility that computational emulation would either fail to successfully replicate a given physical process (a functional-modality concern) or fail to successfully maintain subjective-continuity (an operational-modality concern), most likely due to a difference in the physical operation of possible computational substrates compared to the physical operation of the brain (see Chapter 2). In regard to functionality, we might fail to computationally replicate (whether in simulation or emulation) a relevant physical process for reasons other than vitalism. We could fail to understand the underlying principles governing it, or we might understand its underlying principles well enough to predictively model it yet still fail to understand how it affects the other processes occurring in the neuron: for instance, if we used different modeling techniques or general model types for each component, we might be able to predictively model each individually while being unable to model how they affect each other, due to model untranslatability. Neither of these cases precludes the aspect in question from being completely material, and thus completely explicable, in principle, using the normative techniques we use to predictively model the universe. The physical-functionalist approach attempted to solve these potential problems through several NRU sub-classes, some of which kept certain biological features and functionally replaced certain others, while others kept and replaced alternate sets of biological features. These sub-classes can be considered varieties of biological-nonbiological NRU hybrids that keep those biological features we failed to functionally or operationally replicate successfully, integrating them, as they exist in the biological nervous system, into the NRU's own predominantly non-biological operation.

The subjective-continuity problem, however, is not concerned with whether something can be functionally replicated but with whether it can be functionally replicated while still retaining subjective-continuity throughout the procedure.

This category of possible basis for subjective-continuity has stark similarities to the possible problematic aspects (i.e., operational discontinuity) of current computational paradigms and substrates discussed in Chapter 2. In that case it was postulated that discontinuity occurred as a result of taking something normally operationally continuous and making it discontinuous: namely, (a) the fact that current computational paradigms are serial (whereas the brain has massive parallelism), which may cause components to only be instantiated one at a time, and (b) the fact that the resting membrane potential of biological neurons makes them procedurally continuous—that is, when in a resting or inoperative state they are still both on and undergoing minor fluctuations—whereas normative logic gates both do not produce a steady voltage when in an inoperative state (thus being procedurally discontinuous) and do not undergo minor fluctuations within such a steady-state voltage (or, more generally, a continuous signal) while in an inoperative state. I had a similar fear in regard to some mathematical and computational models as I understood them in 2009: what if we were taking what was a continuous process in its biological environment, and—by using multiple elements or procedural (e.g., computational, algorithmic) steps to replicate what would have been one element or procedural step in the original—effectively making it discontinuous by introducing additional intermediate steps? Or would we simply be introducing a number of continuous steps—that is, if each element or procedural step were operationally continuous in the same way that the components of a neuron are, would it then preserve operational continuity nonetheless?

This led to my attempting to develop a modeling approach aimed at retaining the same operational continuity as exists in biological neurons, which I will call the relationally isomorphic mathematical model. The biophysical processes comprising an existing neuron are what implement its computation; by using biophysical-mathematical models as our modeling approach, we might be introducing an element of discontinuity by mathematically modeling the physical processes giving rise to a computation/calculation rather than modeling the computation/calculation directly. It might be the difference between modeling a given program and modeling the physical processes comprising the logic elements giving rise to that program. Thus, my novel approach during this period was to explore ways to model the computation directly.

Rather than using a host of mathematical operations to model the physical components that themselves give rise to a different type of mathematics, we instead use a modeling approach that maintains a 1-to-1 element or procedural-step correspondence with the level-of-scale that embodies the salient (i.e., aimed-for) computation. My attempts at developing this produced the following approach, though I lack the pure mathematical and computer-science background to judge its true accuracy or utility. The components, their properties, and the inputs used for a given model (at whatever scale) are substituted by numerical values, the magnitudes of which preserve the relationships (e.g., ratio relationships) between components/properties and inputs, and by mathematical operations which preserve the relationships exhibited by their interactions. For instance, if the interaction between a given component/property and a given input produces an emergent inhibitory effect biologically, then one would combine them to get their difference or their quotient, depending on whether they exemplify a linear or a nonlinear relationship. If the component/property and the input combine to produce emergently excitatory effects biologically, one would combine them to get their sum or their product, depending on whether they increase excitation in a linear or a nonlinear manner.
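The combination rule described above can be sketched in a few lines of Python. This is purely illustrative: the function name, the string encoding of the relationship types, and my reading of the nonlinear-inhibitory case as a quotient are assumptions, not part of the original scheme.

```python
def combine(component: float, input_value: float, effect: str, linear: bool) -> float:
    """Combine a component/property value with an input value while
    preserving the kind of relationship they exhibit biologically.

    effect: "excitatory" or "inhibitory"
    linear: True for a linear relationship, False for a nonlinear one.
    """
    if effect == "excitatory":
        # linear excitation -> sum; nonlinear excitation -> product
        return component + input_value if linear else component * input_value
    # linear inhibition -> difference; nonlinear inhibition -> quotient
    return component - input_value if linear else component / input_value

# e.g., combine(3.0, 2.0, "excitatory", True) yields 5.0,
# while combine(3.0, 2.0, "inhibitory", False) yields 1.5
```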

In an example from my notes, I tried to formulate how a chemical synapse could be modeled in this way. Neurotransmitters are given analog values, such as positive or negative numbers, the sign of which depends on whether the neurotransmitter is excitatory or inhibitory and the magnitude of which depends on how much more excitatory or inhibitory it is than other neurotransmitters, all in reference to a baseline value (perhaps 0 if neutral, i.e., neither excitatory nor inhibitory; however, we may need to make this a negative value, considering that the neuron's resting membrane-potential is electrically negative rather than electrochemically neutral). If they are neurotransmitter clusters, then one value would represent the neurotransmitter and another value its quantity, the sum or product of which represents the cluster. If a cluster consists of multiple neurotransmitters, then two values (i.e., type and quantity) would be used for each, and the product of all the values represents the cluster. Each summative-product value is given a second, vector value separate from its state-value, representing its direction and speed in the 3D space of the synaptic junction. Thus, by summing the products of all clusters, the resulting numerical value should contain the relational operations each value corresponds to, along with the interactions and relationships represented by the first- and second-order products. The key lies in determining whether the relationship between two elements (e.g., two neurotransmitters) is linear (in which case they are summed) or nonlinear (in which case their product is taken), and whether it is a positive or negative relationship; in the negative case their difference (linear) or their quotient (nonlinear) would be used in place of their sum or product. Combining the vector products would take into account how each cluster's speed and position affects the end result, thus effectively emulating the process of diffusion across the synaptic junction.
The model's past states (which might need to be included in such a modeling methodology to account for synaptic plasticity, e.g., long-term potentiation and long-term modulation) would hypothetically be incorporated into the model via a temporal-vector value, wherein a third value (position along a temporal or "functional"/"operational" axis) is used when combining the values into a final summative product. This is similar to modeling techniques such as phase-space, a quantitative technique for modeling a given system's "system-vector-states", i.e., the functional/operational states it has the potential to possess.
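The cluster representation described above can be sketched as a toy model, assuming the product form for clusters: each cluster carries a signed neurotransmitter value, a quantity, and a 3-D velocity vector, and the net transmission is taken as the sum of the per-cluster products. All names and numbers here are hypothetical, and the diffusion step (combining the vector values) is omitted.

```python
from dataclasses import dataclass
from typing import Iterable, Tuple

@dataclass
class Cluster:
    nt_value: float   # signed value: positive = excitatory, negative = inhibitory
    quantity: float   # amount of the neurotransmitter in the cluster
    velocity: Tuple[float, float, float]  # direction/speed in the 3-D synaptic junction

    def state_value(self) -> float:
        # a cluster is represented as the product of type-value and quantity
        return self.nt_value * self.quantity

def transmission_value(clusters: Iterable[Cluster]) -> float:
    # net transmission: the sum of the per-cluster products
    return sum(c.state_value() for c in clusters)

clusters = [
    Cluster(nt_value=1.5, quantity=2.0, velocity=(0.0, 0.0, 1.0)),   # excitatory cluster
    Cluster(nt_value=-1.0, quantity=1.0, velocity=(0.1, 0.0, 0.9)),  # inhibitory cluster
]
print(transmission_value(clusters))  # 1.5*2.0 + (-1.0)*1.0 = 2.0
```

A fuller sketch would weight each `state_value` by a function of its velocity and position before summing, to emulate diffusion across the junction.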

How excitatory or inhibitory a given neurotransmitter is may depend upon the other neurotransmitters already present in the synaptic junction; thus, if the relationship between one neurotransmitter and another is not the same as the relationship between that first neurotransmitter and an arbitrary third, then one cannot use static numerical values for them, because the sequence in which they were released would affect how cumulatively excitatory or inhibitory a given synaptic transmission is.

A hypothetically possible case of this would be one type of neurotransmitter that can bond or react with two or more other types of neurotransmitter. Let's say that it's more likely to bond or react with one than with the other. If the chemically less attractive (or less reactive) one were released first, it would bond anyway, owing to the absence of the comparatively more attractive one; if the more attractive one were then released thereafter, it would find nothing to bond with, because the first neurotransmitter would already have bonded with the chemically less attractive one.
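This order-dependence can be made concrete with a toy simulation (mine, not the author's), assuming irreversible bonding and a neurotransmitter X that can react with either of two hypothetical partners, A (less attractive) and B (more attractive):

```python
def bond_partner(release_order):
    """Return the partner that neurotransmitter X ends up bound to.

    Toy assumption: X bonds irreversibly with the first compatible partner
    it encounters, regardless of relative chemical attractiveness.
    """
    compatible = {"A", "B"}      # X can react with either A or B
    for nt in release_order:
        if nt in compatible:
            return nt            # X is now bound; later arrivals are ignored
    return None

# The same two releases yield different products depending on sequence:
assert bond_partner(["A", "B"]) == "A"   # less attractive partner, released first, wins
assert bond_partner(["B", "A"]) == "B"
```

Because the same pair of releases yields different outcomes depending on sequence, no single static value per neurotransmitter could capture the cumulative effect.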

If a given neurotransmitter's numerical value or weighting is determined by its relation to other neurotransmitters (i.e., if one is excitatory and another is twice as excitatory, then if the first were 1.5, the second would be 3, assuming a linear relationship), and a given neurotransmitter does prove to have a different relationship to one neurotransmitter than it does to another, then we cannot use a single value for it. Thus we might not be able to configure the model such that the normative mathematical operations follow naturally from one another; instead, we may have to computationally determine (via the hypothetically subjectively discontinuous method that incurs additional procedural steps) which mathematical operations to perform, and then perform them continuously, without having to stop and compute what comes next, so as to preserve subjective-continuity.

We could also run the subjectively discontinuous model at a faster speed to account for its higher quantity of steps/operations and the need to keep up with the relationally isomorphic mathematical model, which possesses comparatively fewer procedural steps. Thus subjective-continuity could hypothetically be achieved (given the validity of the present postulated basis for subjective-continuity—operational continuity) via this method of intermittent external intervention, even if we need extra computational steps to replicate the single informational transformations and signal-combinations of the relationally isomorphic mathematical model.


Choosing the Right Scale for Brain Emulation – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 2, 2013
******************************
This essay is the ninth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first eight chapters were previously published on The Rational Argumentator under the following titles:
***

The two approaches falling within this class considered thus far are (a) computational models that model the biophysical (e.g., electromagnetic, chemical, and kinetic) operation of the neurons, i.e., the physical processes instantiating their emergent functionality, whether at the scale of tissues, molecules, atoms, or anything in between; and (b) abstracted models, a term designating anything that computationally models the neuron using its (sub-neuron but super-protein-complex) components themselves as the chosen model-scale. (The former, by contrast, uses for its chosen model-scale the scale at which the physical processes emergently instantiating those higher-level neuronal components exist, such as the membrane and the individual proteins forming the transmembrane protein-complexes.) An abstracted model qualifies as such regardless of whether each component is abstracted as a normative-electrical-component analogue (i.e., using circuit diagrams in place of biological schematics, like equating the lipid bilayer membrane with a capacitor connected to a variable battery) or as a mathematical model in which a relevant component or aspect of the neuron becomes a term (e.g., a variable or constant) in an equation.

It was during the process of trying to formulate different ways of mathematically (and otherwise computationally) modeling neurons or sub-neuron regions that I laid the conceptual embryo of the first new possible basis for subjective-continuity: the notion of operational isomorphism.

A New Approach to Subjective-Continuity Through Substrate Replacement

There are two other approaches to increasing the likelihood of subjective-continuity that I explored during this period, each based on a presumed physical basis for discontinuity. Note that these approaches are unrelated to graduality, which has been the main determining factor impacting the likelihood of subjective-continuity considered thus far. The new approaches consist of designing the NRUs so as to retain the respective postulated physical bases for subjective-continuity that exist in the biological brain. Thus they are unrelated to increasing the efficacy of the gradual-replacement procedure itself, and instead concern the design requirements of the functional-equivalents used to gradually replace the neurons in a way that maintains immediate subjective-continuity.

Operational Isomorphism

Whereas functionality deals only with the emergent effects or end-product of a given entity or process, operationality deals with the procedural operations performed so as to give rise to those emergent effects. A mathematical model of a neuron might be highly functionally equivalent while failing to be operationally equivalent in most respects. Isomorphism can be considered a measure of "sameness", but technically means a 1-to-1 correspondence between the elements of two sets (which here corresponds with operational isomorphism) or between the sums or products of the elements of two sets (which here corresponds with functional isomorphism, using the definition of functionality employed above). Thus, operational isomorphism is the degree to which the sub-components (be they material, as in entities, or procedural, as in processes) of two larger-scale components, or the operational modalities possessed by each respective collection of sub-components, are equivalent.
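A simple programming analogy (mine, not the author's) may help fix the distinction: two routines can be functionally isomorphic, producing identical end-products, while being operationally non-isomorphic, performing entirely different procedural steps to get there.

```python
def sum_loop(n: int) -> int:
    # many procedural steps: repeated addition, one element at a time
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n: int) -> int:
    # a single procedural step: the closed-form expression
    return n * (n + 1) // 2

# Functionally isomorphic: identical outputs for every input tested...
assert all(sum_loop(n) == sum_formula(n) for n in range(100))
# ...yet operationally non-isomorphic: the steps taken differ entirely.
```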

To what extent does the brain possess operational isomorphism? It seems to depend on the scale being considered. At the highest scale, different areas of the nervous system are classed as systems (as in functional taxonomies) or regions (as in anatomical taxonomies). At this level the separate regions (i.e., components sharing a scale) differ widely from one another in operational modality: they process information very differently from the way other components on the same scale do. If this scale were chosen as the model-scale of our replication-approach, and the preceding premise (that the physical basis for subjective-continuity is the degree of operational isomorphism between components at a given scale) were accepted, then we would have a high probability of replicating functionality but a low probability of retaining subjective-continuity through gradual replacement. This would be true even if we used the degree of operational isomorphism between separate components as the only determining factor for subjective-continuity and ignored concerns of graduality (e.g., the scale or rate, or scale-to-rate ratio, at which gradual substrate replacement occurs).

Contrast this to the molecular scale, where the operational modality of each component (being a given molecule) and the procedural rules determining the state-changes of components at this scale are highly isomorphic. The state-changes of a given molecule are determined by molecular and atomic forces. Thus if we use an informational-functionalist approach, choose a molecular scale for our model, and accept the same premises as the first example, we would have a high probability of both replicating functionality and retaining subjective-continuity through gradual replacement because the components (molecules) have a high degree of operational isomorphism.

Note that this is only a requirement for the sub-components instantiating the high-level neural regions/systems that embody our personalities and higher cognitive faculties, such as the neocortex; that is, even if a molecular model-scale proved necessary for the reasons described above, we wouldn't have to use it for the whole brain, which would be very computationally intensive.

So at the atomic and molecular scales the brain possesses a high degree of operational isomorphism. On the scale of the individual protein complexes, which collectively form a given sub-neuronal component (e.g., an ion channel), components still appear to possess a high degree of operational isomorphism, because all state-changes are determined by the rules governing macroscale proteins and protein-complexes (i.e., biochemistry, and particularly protein-protein interactions); by virtue of being composed of the same general constituents (amino acids), all components at this scale share the factors determining their state-changes. The scale of individual neuronal components, however, seems to possess a comparatively lesser degree of operational isomorphism. Some ion channels are ligand-gated while others are voltage-gated; thus, different aspects of physicality (molecular shape and voltage, respectively) form the procedural rules determining state-changes at this scale. Since there are two different determining factors at this scale, its degree of operational isomorphism is less than that of the protein and protein-complex scale and the molecular scale, both of which appear to have only one governing procedural-rule set. The scale of individual neurons, by contrast, appears to possess a greater degree of operational isomorphism: every neuron fires according to its threshold value, summing analog action-potential values into a binary output (the neuron either fires or does not). Even though individual neurons of a given type are more operationally isomorphic in relation to each other than to neurons of another type, all neurons regardless of type still act in a highly isomorphic manner.

However, the scale of neuron-clusters and neural networks, which operate and communicate according to spatiotemporal sequences of firing patterns (action-potential patterns), appears to possess a lesser degree of operational isomorphism than individual neurons, because different sequences of firing patterns will mean different things to two respective neural clusters or networks. Also note that at this scale the degree of functional isomorphism between components appears to be less than their degree of operational isomorphism; that is, the ways in which the clusters or networks operate are more similar to one another than are their actual functions (i.e., what they effectively do). Lastly, at the scale of high-level neural regions/systems, components (i.e., neural regions) differ significantly in morphology, operationality, and functionality, and thus appear to constitute the scale possessing the least operational isomorphism.

I will now illustrate the concept of operational isomorphism using the physical-functionalist and the informational-functionalist NRU approaches as examples. In terms of the physical-functionalist (i.e., prosthetic neuron) approach, both the passive (i.e., "direct") and the CPU-controlled sub-classes are, each taken on its own, operationally isomorphic. An example of a physical-functionalist NRU that would not possess operational isomorphism is one that uses a passive-physicalist approach for one type of component (e.g., voltage-gated ion channels) and a CPU-controlled/cyber-physicalist approach [see Part 4 of this series] for another type (e.g., ligand-gated ion channels): on that scale the components act according to different technological and methodological infrastructures, exhibit different operational modalities, and thus appear to possess a low degree of operational isomorphism. Note that the concern is not the degree of operational isomorphism between the functional-replication units and their biological counterparts, but rather the degree of operational isomorphism between the functional-replication units and other units on the same scale.

Another possibly relevant type of operational isomorphism is the degree of isomorphism between the individual sub-components or procedural operations (i.e., "steps") composing a given component, designated here as intra-operational isomorphism. While very similar to the degree of isomorphism for the scale immediately below, this differs from (i.e., is not equivalent to) such a designation in that the sub-components of a given larger component could be functionally isomorphic in relation to each other without being operationally isomorphic in relation to all other components on that scale. The passive sub-approach of the physical-functionalist approach would possess a greater degree of intra-operational isomorphism than would the CPU-controlled/cyber-physicalist sub-approach, because presumably each component would interact with the others (via physically embodied feedback) according to the same technological and methodological infrastructure, be it mechanical, electrical, chemical, or otherwise. The CPU-controlled sub-approach, by contrast, would possess a lesser degree of intra-operational isomorphism, because the sensors, the CPU, and the electric or electromechanical systems (the three main sub-components of each singular neuronal component, e.g., an artificial ion channel) operate according to different technological and methodological infrastructures and thus exhibit alternate operational modalities in relation to each other.

In regard to the informational-functionalist approach, an operationally isomorphic NRU model is one wherein, regardless of the scale used, the type of approach used to model a given component on that scale is as isomorphic as possible with the approaches used to model the other components on the same scale. For example, if one uses a mathematical model to simulate spiking regions of the dendritic spine, then one shouldn’t use a non-mathematical (e.g., strictly computational-logic) approach to model non-spiking regions of the dendritic spine. Since the number of variations on the informational-functionalist approach is greater than could exist for the physical-functionalist approach, there are more gradations to the degree of operational isomorphism. Using the exact same branch of mathematics to model two respective components would incur a greater degree of operational isomorphism than using alternate mathematical techniques from different disciplines. Likewise, if we used different computational approaches to model the respective components, we would have a lesser degree of operational isomorphism. And if we emulated some components while merely simulating others, we would have a lesser degree of operational isomorphism than if both were either strictly simulatory or strictly emulatory.
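The notion of grading operational isomorphism can be made concrete with a toy sketch. The following function is entirely my own illustration (not part of the original project): it scores a set of component models on one scale by the fraction of component pairs that share the same modeling modality.

```python
# Illustrative sketch only: a toy "degree of operational isomorphism" for
# the models of the components on a single scale, computed as the fraction
# of component pairs whose modeling modalities match.
from itertools import combinations

def operational_isomorphism(modalities):
    """modalities: one label per component on a given scale, naming the
    modeling approach used for it. Returns a score in [0, 1]."""
    pairs = list(combinations(modalities, 2))
    if not pairs:
        return 1.0  # zero or one component: trivially isomorphic
    same = sum(1 for a, b in pairs if a == b)
    return same / len(pairs)

# All components modeled the same way: fully isomorphic.
print(operational_isomorphism(["ODE", "ODE", "ODE"]))    # 1.0
# One component modeled via computational logic instead: lower degree.
print(operational_isomorphism(["ODE", "ODE", "logic"]))
```

A set modeled entirely with one technique scores 1.0; mixing in a single alternate modality lowers the score, mirroring the gradations described above.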

If this premise proves true, it suggests that when picking the scale of our replication approach (be it physical-functionalist or informational-functionalist), we should choose a scale that exhibits operational isomorphism—for example, the molecular scale rather than the scale of high-level neural regions—and that we shouldn’t model one component (e.g., a molecular system) with techniques widely dissimilar from those used for another component on the same scale.

Note that, unlike operational-continuity, the degree of operational isomorphism was not an explicit concept or potential physical basis for subjective-continuity at the time of my work on immortality (the concept wasn’t yet fully fleshed out in 2010). Rather, it was formulated in response to going over my notes from that period so as to distill the broad developmental gestalt of the project; though it appears to be inherent in (i.e., hinted at by) that earlier work, it wasn’t made explicit until relatively recently.

The next chapter describes the rest of my work on technological approaches to techno-immortality in 2010, focusing on a second new approach to subjective-continuity through a gradual-substrate-replacement procedure, and concluding with an overview of the ways my project differs from the other techno-immortalist projects.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Squishy Machines: Bio-Cybernetic Neuron Hybrids – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 25, 2013
******************************
This essay is the eighth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first seven chapters were previously published on The Rational Argumentator under the following titles:
***

By 2009 I felt that the major classes of physicalist-functionalist replication approaches were largely developed, with further work now producing only minor variations in approach and procedure. These developments consisted of contingency plans for the case that some aspect of neuronal operation couldn’t be replicated with alternate, non-biological physical systems and processes; such plans were based around the goal of maintaining those biological (or otherwise organic) systems and processes artificially and of integrating them with the processes that could be reproduced artificially.

2009 also saw further developments in the computational approach, where I conceptualized a new sub-division in the larger class of the informational-functionalist (i.e., computational, which encompasses both simulation and emulation) replication approach, which is detailed in the next chapter.

Developments in the Physicalist Approach

During this time I mainly explored varieties of the cybernetic-physical functionalist approach. This involved the use of replicatory units that preserve certain biological aspects of the neuron while replacing others with functionalist replacements, alongside other NRUs that preserve alternate biological aspects while replacing different ones with functional replacements. The reasoning behind this approach was twofold. First, there was a chance, no matter how small, that we might fail to sufficiently replicate some relevant aspect(s) of the neuron either computationally or physically through failing to understand the underlying principles of that particular sub-process/aspect. Second, this approach would work in the event that some material aspect couldn’t be sufficiently replicated via non-biological, physically embodied systems (i.e., the normative physical-functionalist approach).

These varieties were thus conceived as contingencies in case we couldn’t replicate certain components successfully (i.e., without functional divergence). The chances of preserving subjective-continuity in such circumstances increase with the number of varieties we have in this class of model (i.e., different arrangements of mechanical replacement-components and biological components), because we don’t know in advance which components we would fail to replicate functionally.
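The redundancy argument can be put in toy arithmetic terms (the uniform-failure assumption here is mine, not the author's): if exactly one of n components turns out to be impossible to replicate artificially, each possibility being equally likely, and each hybrid variety retains a different single component biologically, then k varieties cover a k/n share of the possible failure cases.

```python
# Toy sketch of the redundancy argument (assumptions are illustrative):
# exactly one of n neuronal components fails artificial replication, with
# each equally likely, and each hybrid variety keeps one distinct
# component biological.
def p_some_variety_survives(n_components, k_varieties):
    """Probability that at least one variety retained (biologically)
    the single component that failed artificial replication."""
    return min(k_varieties, n_components) / n_components

print(p_some_variety_survives(10, 1))  # 0.1
print(p_some_variety_survives(10, 7))  # 0.7
```

More varieties strictly improve the odds until every failure case is covered, which is the intuition behind multiplying the arrangements of mechanical and biological components.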

This class of physical-functionalist model can be usefully considered as electromechanical-biological hybrids, wherein the receptors (i.e., transporter proteins) on the post-synaptic membrane are integrated with the artificial membrane and coexist with artificial ion-channels, or wherein the biological membrane is retained while the receptors and ion-channels are replaced with functional equivalents instead. The biological components would be extracted from the existing biological neurons and reintegrated with the artificial membrane. Otherwise, they would have to be synthesized via electromechanical systems, such as, but not limited to, the use of chemical stores of amino acids released in specific sequences to facilitate in vivo protein folding and synthesis; the resulting proteins would then be transported to and integrated with the artificial membrane. This is preferable to providing stores of pre-synthesized proteins, because of the greater complexity of storing synthesized proteins without decay or functional degradation over storage time, and of restoring them from their inactive “stored” state to a functionally active state when ready for use.

During this time I also explored the possibility of using the neuron’s existing protein-synthesis systems to facilitate the construction and gradual integration of the artificial sections with the existing lipid-bilayer membrane. Work in synthetic biology allows us to use viral gene vectors to replace a given cell’s constituent genome, thereby making it manufacture various non-organic substances in place of the substances created via its normative protein synthesis. We could use such techniques to replace the existing protein-synthesis instructions with ones that manufacture and integrate the molecular materials constituting the artificial membrane sections, artificial ion-channels, and ion-pumps. Indeed, it may even be a functional necessity to gradually replace a given neuron’s protein-synthesis machinery with machinery for the replacement, integration, and maintenance of the non-biological sections’ material, because otherwise those parts of the neuron would keep trying to rebuild each section of lipid-bilayer membrane we iteratively remove and replace. This could be problematic, and so successful gradual replacement of single neurons may require a means of gradually switching off and/or replacing portions of the cell’s protein-synthesis systems.
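The ordering constraint described above (switch off a section's native synthesis before swapping in its artificial replacement) can be sketched in toy form; the section names and the all-or-nothing "rebuild" behavior are hypothetical simplifications of mine.

```python
# Hypothetical sketch of the iterative-replacement constraint described
# above: a membrane section replaced while its native protein synthesis
# is still active gets "rebuilt" by the cell, so the replacement fails.
def gradual_replace(sections, synthesis_off):
    """sections: ordered list of membrane-section ids; synthesis_off: set
    of ids whose native synthesis has been disabled. Returns the sections
    successfully replaced in one pass."""
    replaced = []
    for s in sections:
        if s in synthesis_off:      # safe to swap in the artificial section
            replaced.append(s)
        # else: cell rebuilds the lipid bilayer; replacement does not hold
    return replaced

print(gradual_replace(["m1", "m2", "m3"], {"m1", "m3"}))  # ['m1', 'm3']
```

The sketch only encodes the dependency, not any biology: each section's replacement "takes" exactly when its native synthesis has already been switched off.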


Neuronal “Scanning” and NRU Integration – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 23, 2013
******************************
This essay is the seventh chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first six chapters were previously published on The Rational Argumentator under the following titles:
***

I was planning on using the NEMS already conceptually developed by Robert Freitas for nanosurgery applications (supplemented by MEMS if the requisite technological infrastructure was unavailable at the time) to take in vivo recordings of the salient neural metrics and properties needing to be replicated. One novel approach was to design the units with elongated, worm-like bodies, disposing the computational and electromechanical apparatus along the length of the unit. This sacrifices width for length so as to allow the units to fit inside the extracellular space between neurons and glial cells—a postulated solution to a lack of sufficient miniaturization. If a unit were too wide to be used in this way, narrowing it while extending its length in proportion would allow it to operate in the extracellular space, provided that its means of data-measurement weren’t themselves too large to fit there (the span of ECF between two adjacent neurons is around 200 Angstroms for much of the brain).

I was planning on using the chemical and electrical sensing methodologies already in development for nanosurgery as the technological and methodological infrastructure for the neuronal data-measurement methodology. However, I also explored my own conceptual approaches to data-measurement. These concerned the detection of variation in morphological features in particular, as the extant schemes for electrical and chemical sensing seemed either sufficiently developed or to be receiving sufficient developmental support and/or funding. One approach was the use of laser-scanning or, more generally, reflection-based ranging (e.g., sonar) to measure and record morphological data. Another was a device that uses a 2D array of depressible members (e.g., solid members attached to a spring or ratchet assembly, operatively connected to a means of detecting how far each individual member is depressed—such as, but not limited to, piezoelectric crystals that produce electricity in response and proportion to applied mechanical strain). The device would be run along the neuronal membrane, and the topology of the membrane would be recorded by the pattern of depression readings, which are then integrated to provide a topographic map of the neuron (e.g., the relative locations of integral membrane components to determine morphology, and the magnitude of depression to determine emergent topology). This approach could also potentially be used to identify the integral membrane proteins, rather than using electrical or chemical sensing techniques, if the topologies of the respective proteins are sufficiently different as to be detectable by the unit (determined by its degree of precision, which is typically a function of its degree of miniaturization).
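The depression-array idea lends itself to a simple matching sketch. Assuming (my assumption, not the author's) that each protein class presents a characteristic depth profile to the array, identification could amount to nearest-signature matching; all names and numbers below are hypothetical.

```python
# Illustrative sketch: identify an integral membrane protein by matching
# a measured depression profile against known topological signatures,
# using least squared distance. Signatures and values are hypothetical.
def identify_protein(profile, signatures):
    """profile: sequence of member-depression depths; signatures: dict of
    protein name -> reference profile. Returns the closest match."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda name: dist(profile, signatures[name]))

signatures = {
    "ion_channel": [0.2, 0.9, 0.9, 0.2],   # hypothetical reference profiles
    "ion_pump":    [0.5, 0.4, 0.8, 0.6],
}
print(identify_protein([0.25, 0.85, 0.95, 0.15], signatures))  # ion_channel
```

Whether this works in practice is exactly the condition stated above: the proteins' topologies must differ by more than the unit's measurement precision.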

The constructional and data-measurement units would also rely on the technological and methodological infrastructure for organization and locomotion that would be used in normative nanosurgery. I conceptually explored such techniques as the use of a propeller, pressure-based methods (i.e., a stream of water acting as jet exhaust would in a rocket), artificial cilia, and tracks that the unit attaches to so as to be moved electromechanically. The use of tracks decreases computational intensiveness (i.e., the amount of computation required per unit time), since a unit no longer needs to compute its relative location in order to perform obstacle-avoidance and avoid, say, damaging in-place biological neurons. Obstacle-avoidance and related concerns are instead negated by tracks that limit the unit’s degrees of freedom—thus preventing it from having to incorporate computational techniques of obstacle-avoidance (and their entailed sensing apparatus). This also decreases the necessary precision (and thus, presumably, the required degree of miniaturization) of the means of locomotion, which would need to be much greater if the unit were to perform real-time obstacle-avoidance. Such tracks would be constructed in iterative fashion: the constructional system would analyze the space in front of it to determine whether it was occupied by a neuron terminal or soma, and extrude the track iteratively (e.g., adding a segment in spaces where it detects the absence of biological material). It would then move along the newly extruded track, progressively extending it through the spaces between neurons as it moves forward.
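The iterative extrusion loop can be sketched in toy form, reducing the sensed space ahead to a one-dimensional occupancy list (my simplification; the real scheme would sense three-dimensional volumes).

```python
# Toy sketch of the iterative track-extrusion scheme described above:
# extrude a segment only where no biological material is detected, then
# advance along the newly laid segment.
def extrude_track(space_ahead):
    """space_ahead: list of booleans, True meaning the space is occupied
    by a neuron terminal or soma. Returns indices where track segments
    were laid before an obstacle was encountered."""
    track = []
    for position, occupied in enumerate(space_ahead):
        if occupied:
            break              # obstacle: stop rather than compute avoidance
        track.append(position)  # extrude a segment, then move onto it
    return track

print(extrude_track([False, False, False, True, False]))  # [0, 1, 2]
```

Note how the loop embodies the trade described above: no obstacle-avoidance computation is needed, because the track is only ever laid through space already sensed to be empty.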

Non-Distortional in vivo Brain “Scanning”

A novel avenue of enquiry during this period involved counteracting, or taking into account, the distortions caused by the data-measurement units on the elements or properties they are measuring, and subsequently applying corrections to the recorded data. A unit changes the very local environment it is supposed to be measuring and recording, which is problematic. My solution was to test which operations performed by the units have the potential to distort relevant attributes of the neuron or its environment, and to build units that compensate for those distortions either physically or computationally.

If we can reduce the ways a recording unit’s operation distorts neuronal behavior to a list of mathematical rules, we can take the recordings and apply mathematical techniques to eliminate or “cancel out” those distortions post-measurement, thus arriving at what would have been the correct data. This approach works only if the distortions affect the recorded data (i.e., change it in predictable ways), and not if they affect the unit’s ability to actually access, measure, or resolve such data.
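A minimal sketch of the idea, assuming the distortion reduces to a known, invertible rule; the linear gain-and-offset used here is my stand-in and is far simpler than any realistic distortion model.

```python
# Minimal sketch of post-measurement distortion cancellation: if the
# unit's distortion follows a known invertible rule (here, linear gain
# plus offset), the true value can be recovered from the recording.
def cancel_distortion(recorded, gain, offset):
    """Invert recorded = gain * true + offset to recover the true value."""
    return (recorded - offset) / gain

true_value = 3.0
recorded = 1.5 * true_value + 0.2      # what the unit actually measures
print(cancel_distortion(recorded, gain=1.5, offset=0.2))  # 3.0
```

The caveat in the text maps directly onto the sketch: the inversion recovers data the unit distorted, but nothing can be recovered if the distortion prevented the data from being measured at all.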

The second approach applies the method underlying the first approach to the physical environment of the neuron. A unit senses and records the constituents of the area of space immediately adjacent to its edges and mathematically models that “layer”; i.e., if it is meant to detect ionic solutions (in the case of ECF or ICF), it would measure their concentrations and subsequently model ionic diffusion for that layer. It then moves forward, encountering another adjacent “layer” and integrating it with its extant model. By iteratively sensing what is immediately adjacent to it, the unit can model the space it occupies as it travels through that space. It then uses electric or chemical stores to manipulate the electrical and chemical properties of the environment immediately adjacent to its surface, so as to produce the emergent effects of that model (i.e., the properties of the edges of that model and how those properties causally affect adjacent sections of the environment), thus producing the emergent effects that would have been present if the NRU-construction/integration system or data-measuring system hadn’t occupied that space.
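A toy sketch of this sense-integrate-advance cycle; the "average of the last two layers" below merely stands in for whatever diffusion model would actually be used, and both the representation and the numbers are mine.

```python
# Hypothetical sketch of the layer-by-layer modeling scheme above: sense
# the ionic concentration of each adjacent "layer", fold it into a running
# model, and report the boundary value the unit must reproduce at its
# surface to mimic the environment it has displaced.
def traverse_and_model(layer_concentrations):
    model = []                # layers integrated so far
    boundary_outputs = []     # what the unit's surface must emit per step
    for c in layer_concentrations:
        model.append(c)       # integrate the newly sensed layer
        # Emergent boundary effect: average of the last two layers, a
        # crude stand-in for a diffusion calculation across the boundary.
        if len(model) >= 2:
            boundary_outputs.append((model[-1] + model[-2]) / 2)
        else:
            boundary_outputs.append(c)
    return boundary_outputs

print(traverse_and_model([1.0, 3.0, 5.0]))  # [1.0, 2.0, 4.0]
```

The structure, not the arithmetic, is the point: each sensed layer updates the model, and each update yields the compensating output the unit applies at its trailing surface.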

The third postulated solution was the use of a grid composed of a series of hollow recesses placed in front of the sensing/measuring apparatus. The grid is impressed upon the surface of the membrane, and each compartment isolates a given section of the neuronal membrane from the rest. The constituents of each compartment are measured and recorded, most probably via uptake of the constituents and their transport to a suitable measuring apparatus. A simple indexing system can keep track of which constituents came from which compartment (and thus which region of the membrane they came from). The unit has a chemical store operatively connected to the means of locomotion used to transport the isolated membrane-constituents to the measuring/sensing apparatus. After a given compartment’s constituents are measured and recorded, the system marks those constituents (determined by measurement and already stored as recordings at this point in the process), takes an equivalent molecule or compound from a chemical inventory, and replaces the substance it removed for measurement with the equivalent substance from the inventory. Once this is accomplished for a given section of membrane, the grid moves forward, farther into the membrane, leaving the replacement molecules/compounds from the biochemical inventory in the same respective spots as their original counterparts. It does this iteratively, making its way through the neuron and out the other side. This approach is the most speculative, and thus the least likely to be used. If it were to avoid becoming economically prohibitive, it would likely require NEMS, rather than MEMS, as its technological infrastructure, because for the compartment-constituents to be replaceable from a chemical store after measurement, they need to be simple molecules and compounds rather than sections of emergent protein or tissue, which are comparatively harder to synthesize artificially and store in working order.
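The indexing-and-replacement bookkeeping can be sketched as follows; the constituent names, the inventory mapping, and the flat dictionary representation are all hypothetical illustrations of mine.

```python
# Illustrative sketch of the grid-compartment scheme above: measure each
# compartment's constituents, index the recording by grid position, and
# swap in equivalents from a chemical inventory.
def measure_and_replace(compartments, inventory):
    """compartments: dict of grid index -> list of constituent molecules.
    inventory: dict mapping each constituent to its stocked equivalent.
    Returns (recordings, replaced), where replaced mirrors compartments
    but is filled from the inventory."""
    recordings, replaced = {}, {}
    for index, constituents in compartments.items():
        recordings[index] = list(constituents)           # indexed recording
        replaced[index] = [inventory[m] for m in constituents]
    return recordings, replaced

inventory = {"Na+": "Na+", "K+": "K+", "lipid": "synthetic_lipid"}
recs, swapped = measure_and_replace({(0, 0): ["Na+", "lipid"]}, inventory)
print(swapped)  # {(0, 0): ['Na+', 'synthetic_lipid']}
```

The grid index doubles as the record of which membrane region each constituent came from, which is all the "simple indexing system" in the text needs to provide.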

***

In the next chapter I describe the work done throughout late 2009 on biological/non-biological NRU hybrids, and in early 2010 on one of two new approaches to retaining subjective-continuity through a gradual replacement procedure, both of which are unrelated to concerns of graduality or sufficient functional equivalence between the biological original and the artificial replication-unit.
