
Transhumanism and Mind Uploading Are Not the Same – Video by G. Stolyarov II

In what is perhaps the most absurd attack on transhumanism to date, Mike Adams of NaturalNews.com equates this broad philosophy and movement with “the entire idea that you can ‘upload your mind to a computer’” and further posits that the only kind of possible mind uploading is the destructive kind, where the original, biological organism ceases to exist. Mr. Stolyarov refutes Adams’s equation of transhumanism with destructive mind uploading and explains that advocacy of mind uploading is neither a necessary nor a sufficient component of transhumanism.

References
– “Transhumanism and Mind Uploading Are Not the Same” – Essay by G. Stolyarov II
– “Transhumanism debunked: Why drinking the Kurzweil Kool-Aid will only make you dead, not immortal” – Mike Adams – NaturalNews.com – June 25, 2013
– SENS Research Foundation
– “Nanomedicine” – Wikipedia
– “Transhumanism: Towards a Futurist Philosophy” – Essay by Max More
– 2045 Initiative Website
– Bebionic Website
– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Essay by G. Stolyarov II
– “Immortality: Bio or Techno?” – Essay by Franco Cortese

Refutation of RockingMrE’s “Transhuman Megalomania” Video – Essay and Video by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
August 11, 2013
******************************

Video

Essay

As a libertarian transhumanist, I was rather baffled to see the “Transhuman Megalomania” video on the Rocking Philosophy YouTube channel of one RockingMrE. Rocking Philosophy appears to have much in common with my rational individualist outlook in terms of general principles, though not in terms of some specific positions – such as RockingMrE’s opposition to LGBT rights. The channel’s description states that “Above all Rocking Philosophy promotes individualism and a culture free of coercion. Views are based on the non-aggression principle, realism, and a respect for rationality.” I agree with all of these basic principles – hence my bewilderment that RockingMrE would attempt to assail transhumanism in extremely harsh terms – going so far as to call transhumanism “a mad delusion” and a “threat looming over humanity” – rather than embrace or promote it. Such characterizations could not be more mistaken.

In essays and videos such as “Liberty Through Long Life” and “Libertarian Life-Extension Reforms”, I explain that  libertarianism and transhumanism are natural corollaries and would reinforce one another through a virtuous cycle of positive feedback. If people are indeed free as individuals to innovate and to enter the economic and societal arrangements that they consider most beneficial, what do you think would happen to the rate of technological progress? If you think that the result would not be a skyrocketing acceleration of new inventions and their applications to all areas of life, would that position not presuppose a view that freedom would somehow breed stagnation or lead to sub-optimal utilization of human creative faculties? In other words, would not the view of libertarianism as being opposed to transhumanism essentially be a view that liberty would hold people back from transcending the limitations involuntarily imposed on them by the circumstances in which they and their ancestors found themselves? How could such a view be reconcilable with the whole point of liberty, which is to expand and – as the term suggests – liberate human potential, instead of constricting it?

RockingMrE criticizes transhumanists for attempting to reshape the “natural” condition of humanity and to render such a condition obsolete. Yet this overlooks the essence of human behavior over the past twelve millennia at least. Through the use of technology – from rudimentary hunting and farming implements to airplanes, computers, scientific medicine, and spacecraft – we have already greatly departed from the nasty, brutish, and short “natural” lives of our Paleolithic ancestors. Furthermore, RockingMrE falls prey to the naturalistic fallacy – that the “natural”, defined arbitrarily as that which has not been shaped by deliberate human influence, is somehow optimal or good, when in fact we know that “nature”, apart from human influence, is callously indifferent at best, and viciously cruel in most circumstances, having brought about the immense suffering and demise of most humans who have ever lived and the extinction of 99.9% of species that have ever existed – the vast majority of those extinctions occurring without any human intervention.

RockingMrE characterizes transhumanism as a so-called “evil” that presents itself as a “morally relativist and benign force, where any action can be justified for the greater good.” I see neither moral relativism nor any greater-good justifications in transhumanism. Transhumanism can be justified from an entirely individualistic standpoint. Furthermore, it can be justified from the morally objective value of each individual’s life and the continuation of such life. I, as an individual, do not wish to die and wish to accomplish more than my current  bodily and mental faculties, as well as the current limitations of human societies and the present state of technology, would allow me to accomplish today. I exist objectively and I recognize that my existence requires objective physical prerequisites, such as the continuation of the functions of my biological body and biological mind. Therefore, I support advances in medicine, genetic engineering, nanotechnology, computing, education, transportation, and human settlement which would enable these limitations to be progressively lifted and would improve my chances of seeing a much remoter future than my current rate of biological senescence would allow. As an ethically principled individual, I recognize that all beings with the same essential faculties that I have, ought also to have the right to pursue these aspirations in an entirely voluntary, non-coercive manner. In other words, individualist transhumanism would indeed lead to the good of all because its principles and achievements would be universalizable – but the always vaguely defined notion of the “greater good” does not serve as the justification for transhumanism; the good of every individual does. The good of every individual is equivalent to the good for all individuals, which is the only defensible notion of a “greater good”.

RockingMrE states that some of the technologies advocated by transhumanists are “less dangerous than others, and some are even useful.” Interestingly enough, he includes cryopreservation in the category of less dangerous technologies, because a cryopreserved human who is revived will still have the same attributes he or she had prior to preservation. Life extension is the most fundamental transhumanist aim, the one that makes all the other aims feasible. As such, I am quite surprised that RockingMrE did not devote far more time in his video to technologies of radical life extension. Cryonics is one such approach, which attempts to place a physically damaged organism in stasis after that organism reaches clinical death by today’s definition. In the future, what is today considered death may become reversible, giving that individual another chance at life. There are other life-extension approaches, however, which would not even require stasis. Aubrey de Grey’s SENS approach involves the periodic repair of seven kinds of damage that contribute to senescence and eventual death. A person who is relatively healthy when he begins to undergo the therapies envisioned by SENS might not ever need to get to the stage where cryopreservation would be the only possible way of saving that person. What does RockingMrE think of that kind of technology? What about the integration of nanotechnology into human bodily repair systems, to allow for ongoing maintenance of cells and tissues? If a person still looks, talks, and thinks like many humans do today, but lives indefinitely and remains indefinitely young, would this be acceptable to RockingMrE, or would it be a “megalomaniacal” and “evil” violation of human nature?  Considering that indefinite life extension is the core of transhumanism, the short shrift given to it in RockingMrE’s video underscores the severe deficiencies of his critique.

RockingMrE further supports megascale engineering – including the creation of giant spacecraft and space elevators – as a type of technology that “would enhance, rather than alter, what it means to be human.” He also clearly states his view that the Internet enhances our lives and allows the communication of ideas in a manner that would never have been possible previously. We agree here. I wonder, though, if a strict boundary between enhancing and altering can be drawn. Our human experience today differs radically from that of our Paleolithic and Neolithic ancestors – in terms of how much of the world we are able to see, what information is available to us, the patterns in which we lead our lives, and most especially the lengths of those lives. Many of our distant ancestors would probably consider us magicians or demigods, rather than the humans with whom they were familiar. If we are able to create giant structures on Earth and in space, this would surely broaden our range of possible experiences, as well as the resources of the universe that are accessible to us. A multiplanetary species, with the possibility of easy and fast travel among places of habitation, would be fundamentally different from today’s humanity in terms of possible lifestyles and protections from extinction, even while retaining some of the same biological and intellectual characteristics. As for the Internet, there are already studies suggesting that the abundance of information available online is altering the structure of many humans’ thinking and interactions with that information, as well as with one another. Is this any less human, or just human with a different flavor? If it is just human with a different flavor, might not the other transhumanist technologies criticized by RockingMrE also be characterized this way?

RockingMrE does not even see any significant issues with virtual reality and mind uploading, aside from asking the legitimate question of whether a copy of a person’s mind is still that person. This is a question which has been considerably explored and debated in transhumanist circles, and there is some disagreement as to the answer. My own position, expressed in my essay “Transhumanism and Mind Uploading Are Not the Same”,  is that a copy would indeed not be the same as the individual, but a process of gradual replacement of biological neurons with artificial neurons might preserve a person’s “I-ness” as long as certain rather challenging prerequisites could be met. RockingMrE’s skepticism in this area is understandable, but it does not constitute an argument against transhumanism at all, since transhumanism does not require advocacy of mind uploading generally, or of any particular approach to mind uploading. Moreover, RockingMrE does not see virtual copies of minds as posing a moral problem. In his view, this is because “a program is not an organic life.” We can agree that there is no moral problem posed by non-destructively creating virtual copies of biological minds.

Still, in light of all of the technologies that RockingMrE does not consider to be highly concerning, why in the world does he characterize transhumanism so harshly, after spending the first 40% of his video essentially clarifying that he does not take issue with the actualization of many of the common goals of transhumanists? Perhaps it is because he misunderstands what transhumanism is all about. For the technologies that RockingMrE finds more alarming, he appears to think that they would allow “a level of social engineering that totalitarians could only dream of during the 20th century.” No transhumanist I know of would advocate such centrally planned social engineering. RockingMrE aims his critique at technologies that have “the potential to create human life” – such as gene therapy, which can, in RockingMrE’s words, “dictate the characteristics of life to such an extent that those making the decisions have complete control over how this forms”. This argument appears to presuppose a form of genetic determinism and a denial of human free will, even though RockingMrE would affirm his view that free will exists. Suppose it were possible to make a person five centimeters taller through genetic engineering. Does that have any bearing over how that person will actually choose to lead his life? Perhaps he could become a better basketball player than otherwise, but it is just as possible that basketball would not interest him at all, and he would rather be a taller-than-average chemist, accountant, or painter. This choice would still be up to him, and not the doctors who altered his genome or the parents who paid for the alteration. Alteration of any genes that might influence the brain would have even less of a predetermined or even determinable impact. 
If parents who are influenced by the faulty view of genetic determinism try everything in their power to alter their child’s genome in order to create a super-genius (in their view), who is to say that this child would necessarily act out the parents’ ideal? A true super-genius with a will of his own is probably the most autonomous possible human; he or she would develop a set of tastes, talents, and aspirations that nobody could anticipate or manage, and would run circles around any design to control or limit his or her life. What genetic engineering could achieve, though, is to remove the obstacles to an individual’s self-determination by eliminating genetic sub-optimalities: diseases, weaknesses of organs, and inhibitions to clear self-directed brain function. This is not qualitatively different from helping a child develop intellectually by taking the child out of a violent slum and putting him or her into a peaceful, nurturing, and prosperous setting.

RockingMrE fears that gene therapy would allow “ideologues to suppress certain human characteristics”. While this cannot be ruled out, any such development would be a political problem, not a technological one, and could be addressed only through reforms protecting individual freedom, not through abolition of any techniques of genetic engineering. The vicious eugenics movement of the early 20th century, to which RockingMrE wrongly compares transhumanism, attempted to suppress the characteristics of whole populations of humans using very primitive technologies by today’s standards. The solution to such misguided ideological movements is to maximize the scope for individual liberty, so as to allow the characteristics that individuals consider good or neutral to be preserved and for individual wishes to be protected by law, despite what some eugenicist somewhere might think.

Transhumanism is about giving each person the power to control his or her own destiny, including his or her genotype; transhumanism is certainly not about ceding that control to others. Even a child who was genetically engineered prior to birth would, with sufficient technological advances, be able to choose to alter his or her genotype upon becoming an adult. Just as parental upbringing can influence a child but does not determine a person’s entire future, so can genetic-engineering decisions by parents be routed around, overcome, ignored, or utilized by the child in a way far different from the parents’ intentions. Furthermore, because parents differ considerably in their views of what the best traits would be, engineering at the wishes of parents  would in no way diminish the diversity of human characteristics and would, on the contrary, enhance such diversity by introducing new mixes of traits in addition to those already extant. This is why it is unfounded to fear, as RockingMrE does, that a transhumanist society which embraces genetic engineering would turn into the society of the 1997 film Gattaca, where the non-engineered humans were excluded from non-menial work. Just as today there is no one hierarchy of genotypes and phenotypes, neither would there be such a hierarchy in a society where genetic engineering is practiced. An even greater diversity of people would mean that an even greater diversity of opportunities would be open to all. Indeed, even Gattaca could be seen as a refutation of RockingMrE’s feared scenario that genetic modification would render un-modified humans unable to compete. The protagonist in Gattaca was able to overcome the prejudices of his society through willpower and ingenuity, which would remain open to all. 
While the society of Gattaca relied on coercion to restrict un-modified individuals from competing, a libertarian transhumanist society would have no such restrictions and would allow individuals to rise on the basis of merit alone, rather than on the basis of genetics.

RockingMrE further expresses concern that the unintended consequences of genetic manipulation would result in viruses that reproduce out of control and “infect” humans who were not the intended targets of genetic engineering. This is not a philosophical argument against transhumanism. If such a possibility even exists (and I do not know that it does, as I am not a biologist), it could be mitigated or eliminated through careful controls in the laboratories and clinics where genetic engineering is performed. Certainly, the existence of such a possibility would not justify banning genetic manipulation, since a ban does not mean that the practice being banned goes away. Under a ban, genetic engineering would continue on the black market, where there would be far fewer safeguards in place against unintended negative consequences. It is much safer for technological innovation to proceed in the open, under a legal system that respects liberty and progress while ensuring that the rights of all are protected. Certainly, it would be justified for the legal system to protect the rights of people who do not wish to undergo certain medical treatments; such people should neither be forced into those treatments, nor have the side effects of those treatments, when they are performed on others, affect their own biology. But libertarian transhumanists would certainly agree with that point of view and would hold it consistently with regard to any technology that could conceivably impose negative external effects on non-consenting parties.

RockingMrE thinks that “it is essential that the creation and destruction of life be protected by a code of morality that respects and recognizes natural law – natural law being values derived from nature.” He describes one tier of this natural system as comprised of relationships of trade, “where all individuals have unalienable rights derived from natural action, but free of coercion and the initiation of force, voluntarily associating with one another for mutual gain.” He then says that “only this sort of philosophy can truly prevent nihilists from justifying their evil intentions to play God and […] destroy or alienate any individual that doesn’t adhere to a rigid set of socially engineered parameters.”  The latter statement is a severe misrepresentation of the aims of transhumanists, who do not support centrally planned social engineering and who are certainly not nihilists. Indeed, transhumanist technological progress is the very outcome of voluntary individual association that is free from coercion and the initiation of force. I wonder whether the “fierce defense” envisioned by RockingMrE would involve the initiation of force against innovators who attempt to improve the human genome in order to cure certain diseases, enhance certain human faculties, and lengthen the human lifespan. It is not clear whether RockingMrE advocates such coercion, but if he does, then his opposition to the emergence of these technologies would be inimical to his own stated libertarian philosophy. In other words, his conclusions are completely incompatible with his premises.

Toward the end of his video, RockingMrE uses the example of three-person in vitro fertilization (IVF) as an illustration “of how far down the road of transhumanism we are”. What, dare I ask, is wrong with three-person IVF? RockingMrE believes that it is a contributor to “gradually destroying the natural definition of parenting” – yet parenting is a set of actions to raise a child, not a method of originating that child. If RockingMrE has any problems with children who are brought into this world using three-person IVF, then what about children who are adopted and raised by parents who had no part in their conception? Is that not even more removed from parents who contributed at least some of their genetic material? Furthermore, IVF has been available in some form since the birth of Louise Brown in 1978 – 35 years ago. Since then, approximately 5 million people have been created using IVF. Are they any less human than the rest of us? Have we, as a species, lost some fraction of our humanity as a result? Surely not! And if similar consequences to what has already happened are what RockingMrE fears, then I submit that there is no basis for fear at all. New techniques for creating life and enhancing human potential may not be in line with what RockingMrE considers “natural”, but perhaps his view equates the “unnatural” to the “unfamiliar to RockingMrE”. But he does not have to personally embrace any method of genetic engineering or medically assisted creation of life; he is free to abstain from such techniques himself. What he ought to do, though, as a self-professed libertarian and individualist, is to allow the rest of us, as individuals, the same prerogative to choose to use or to abstain from using these technologies as they become available. The shape that the resulting future takes, as long as it is based on these freedom-respecting principles, is not for RockingMrE to decide or limit.

We Seek Not to Become Machines, But to Keep Up with Them – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
July 14, 2013
******************************
This article attempts to clarify four areas within the movement of Substrate-Independent Minds and the discipline of Whole-Brain Emulation that are particularly ripe for ready-at-hand misnomers and misconceptions.
***

Substrate-Independence 101:

  • Substrate-Independence:
    It is Substrate-Independence for Mind in general, but not any specific mind in particular.
  • The Term “Uploading” Misconstrues More than it Clarifies:
    Once WBE is experimentally-verified, we won’t be using conventional or general-purpose computers like our desktop PCs to emulate real, specific persons.
  • The Computability of the Mind:
    This concept has nothing to do with the brain operating like a computer. The liver is just as computable as the brain; their difference is one of computational intensity, not category.
  • We Don’t Want to Become The Machines – We Want to Keep Up With Them!:
    SIM & WBE are sciences of life-extension first and foremost. It is not out of sheer technophilia, contemptuous “contempt of the flesh”, or wanton want of machinedom that proponents of Uploading support it. It is, for many, because we fear that Recursively Self-Modifying AI will implement an intelligence explosion before Humanity has a chance to come along for the ride. The creation of any one entity superintelligent to the rest constitutes both an existential risk and an antithetical affront to Man, whose sole central and incessant essence is to make himself to an increasingly greater degree, and not to have some artificial god do it for him or tell him how to do it.
Substrate-Independence
***

The term “substrate-independence” denotes the philosophical thesis of functionalism – that what is important about the mind and its constitutive sub-systems and processes is their relative function. If such a function could be recreated using an alternate series of component parts or procedural steps, or could be recreated on another substrate entirely, the philosophical thesis of functionalism holds that it should be the same as the original, experientially speaking.

However, one rather common and ready-at-hand misinterpretation stemming from the term “Substrate-Independence” is the notion that we as personal selves could arbitrarily jump from mental substrate to mental substrate, since mind is software and software can be run on various general-purpose machines. The most common form of this notion is exemplified by scenarios laid out in various Greg Egan novels and stories, wherein a given person sends their mind encoded as a wireless signal to some distant receiver, to be reinstantiated upon arrival.

The term “substrate-independent minds” should denote substrate-independence for minds in general – again, the philosophical thesis of functionalism – and not this second, illegitimate notion. In order to send oneself as such a signal, one would have to put all the processes constituting the mind “on pause” – that is, all causal interaction and thus causal continuity between the software components and processes instantiating our selves would be halted while the software was encoded as a signal, transmitted, and subsequently decoded. We could expect this to be equivalent to temporary brain death or to destructive uploading without any sort of gradual replacement, integration, or transfer procedure. Each of these scenarios incurs the cessation of all causal interaction and causal continuity among the constitutive components and processes instantiating the mind. Yes, we would be instantiated upon reaching our destination, but we can expect this to be as phenomenally discontinuous as brain death or destructive uploading.

There is much talk in the philosophical and futurist circles – where Substrate-Independent Minds are a familiar topic and a common point of discussion – about how the mind is software. This sentiment ultimately derives from functionalism and the notion that when it comes to mind, it is not the material of the brain that matters, but the process(es) emerging therefrom. And because almost all software is designed so as to be implemented on general-purpose (i.e., standardized) hardware, it is assumed that we should likewise be able to transfer the software of the mind onto a new physical computational substrate with as much ease as we transfer ordinary software. While we would emerge from such a transfer functionally isomorphic with ourselves prior to the jump from computer to computer, we can expect this to be the phenomenal equivalent of brain death or destructive uploading – again, because all causal interaction and continuity between that software’s constitutive sub-processes has been discontinued. We would have been put on pause in the time between leaving one computer, whether as a static signal or in static solid-state storage, and arriving at the other.

This is not to say that we couldn’t transfer the physical substrate implementing the “software” of our mind to another body, provided the other body were equipped to receive such a physical substrate. But this doesn’t have quite the same advantage as beaming oneself to the other side of Earth, or Andromeda for that matter, at the speed of light.

But to transfer a given WBE to another mental substrate without incurring phenomenal discontinuity may very well involve a second gradual integration procedure, in addition to the one the WBE initially underwent (assuming it isn’t a product of destructive uploading). And indeed, this would be more properly thought of in the context of a new substrate being gradually integrated with the WBE’s existing substrate, rather than the other way around (i.e., portions of the WBE’s substrate being gradually integrated with an external substrate). It is likely to be much easier to simply transfer a given physical/mental substrate to another body, or to bypass this need altogether by actuating bodies via tele-operation instead.

In summary, what is sought is substrate-independence for mind in general, and not for a specific mind in particular (at least not without a gradual integration procedure, like the type underlying the notion of gradual uploading, so as to transfer such a mind to a new substrate without causing phenomenal discontinuity).

The Term “Uploading” Misconstrues More Than It Clarifies

The term “Mind Uploading” has some drawbacks and creates common initial misconceptions. It is based on terminology originating in the context of conventional, contemporary computers – which may lead to the initial impression that we are talking about uploading a given mind into a desktop PC, to be run in the manner that Microsoft Word is run. This makes the notion of WBE seem more fantastic and incredible – and thus improbable – than it actually is. I don’t think anyone seriously speculating about WBE would entertain such a notion.

Another potential misinterpretation particularly likely to result from the term “Mind Uploading” is that we seek to upload a mind into a computer – as though it were nothing more than a simple file transfer. This, again, connotes modern paradigms of computation and communications technology that are unlikely to be used for WBE. It also creates the connotation of putting the mind into a computer – whereas a more accurate connotation, at least as far as gradual uploading as opposed to destructive uploading is concerned, would be bringing the computer gradually into the biological mind.

It is easy to see why the term initially came into use. The notion of destructive uploading was the first embodiment of the concept. The notion of gradual uploading so as to mitigate the philosophical problems pertaining to how much a copy can be considered the same person as the original, especially in contexts where they are both simultaneously existent, came afterward. In the context of destructive uploading, it makes more connotative sense to think of concepts like uploading and file transfer.

But in the notion of gradual uploading, portions of the biological brain – most commonly single neurons, as in Robert A. Freitas’s and Ray Kurzweil’s versions of gradual uploading – are replaced with in-vivo computational substrate, placed where the neuron being replaced was located. Such a computational substrate would be operatively connected to electrical or electrochemical sensors (to translate the biochemical or, more generally, biophysical output of adjacent neurons into computational input that can be used by the computational emulation) and electrical or electrochemical actuators (to likewise translate computational output of the emulation into biophysical input that can be used by adjacent biological neurons). It is possible to have this computational emulation reside in a physical substrate existing outside of the biological brain, connected to in-vivo biophysical sensors and actuators via wireless communication (i.e., communicating via electromagnetic signal), but this simply introduces a potential lag time that may then have to be overcome by faster sensors, faster actuators, or a faster emulation. It is likely that the lag time would be negligible, especially if the emulation were located in a convenient module external to the body but kept “on it” at all times, minimizing the increase in transmission delay that comes with greater distance from such an external computational device. This would also likely necessitate additional computation to model the necessary changes to transmission timing in response to how far away the person is. Otherwise, signals that are meant to arrive at a given time could arrive too soon or too late, thereby disrupting functionality. However, placing the computational substrate in vivo obviates these potential logistical obstacles.
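The claim that wireless lag would be negligible can be checked with simple arithmetic. The following back-of-the-envelope sketch is my own illustration, not a figure from the article; the one-meter distance and the millisecond-scale neural timescales are assumptions chosen to represent a wearable external module:

```python
# Compare electromagnetic propagation delay over body-scale distances
# with typical neural signaling timescales (all figures are rough assumptions).

C = 3.0e8  # speed of light in m/s (upper bound on EM signal speed)

def transmission_lag(distance_m: float) -> float:
    """One-way lag, in seconds, for an EM signal over the given distance."""
    return distance_m / C

# A wearable module roughly 1 meter from the brain:
lag = transmission_lag(1.0)      # about 3.3 nanoseconds

# Neural events are vastly slower:
action_potential_s = 1e-3        # an action potential lasts on the order of 1 ms

print(f"EM lag over 1 m: {lag:.2e} s")
print(f"Lag as a fraction of an action potential: {lag / action_potential_s:.2e}")
```

The lag is about six orders of magnitude shorter than a single action potential, which is why distance-compensation computation only becomes a concern if the external device moves much farther away.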

This notion is, I think, not brought into the discussion enough. It is an intuitively obvious notion if you’ve thought a great deal about Substrate-Independent Minds and frequented discussions on Mind Uploading. But to a newcomer who has heard the term Gradual Uploading for the first time, it is all too easy to think, “Yes, but then one emulated neuron would exist on a computer, and the original biological neuron would still be in the brain. So once you’ve gradually emulated all these neurons, you have an emulation on a computer and the original biological brain, still as separate physical entities. Then you have an original and a copy – so where does the gradual in Gradual Uploading come in? How is this any different from destructive uploading? At the end of the day you still have a copy and an original as separate entities.”

This seeming impasse is, I think, enough to make the notion of Gradual Uploading seem at least intuitively or initially incredible and infeasible before people take the time to read the literature and discover how gradual uploading could actually be achieved (i.e., wherein each emulated neuron is connected to biophysical sensors and actuators to facilitate operational connection and causal interaction with existing in-vivo biological neurons) without fatally tripping upon such seeming logistical impasses as the example above. The connotations created by the term, I think, to some extent make it seem so fantastic (as in the overly simplified misinterpretations considered above) that people write off the possibility before delving deep enough into the literature and discussion to actually ascertain the possibility with any rigor.

The Computability of the Mind

Another common misconception is that the feasibility of Mind Uploading is based upon the notion that the brain is a computer or operates like a computer. The worst version of this misinterpretation that I’ve come across is the claim that proponents and supporters of Mind Uploading hold the mind to be similar in operation to current and conventional paradigms of computation.

Before I elaborate on why this is wrong, I’d like to point out a particularly harmful sentiment that can result from this notion. It makes the concept of Mind Uploading seem dehumanizing, because conventional computers don’t display anything like intelligence or emotion. It makes people conflate the possible behaviors of future computers with the behaviors of current computers. Obviously computers don’t feel happiness or love, the reasoning goes, and so to say that the brain is like a computer is a farcical claim.

Machines don’t have to be as simple or as unadaptable and invariant as they are today. The universe itself is a machine. In other words, either everything is a machine or nothing is.

This misunderstanding also makes people think that advocates and supporters of Mind Uploading are claiming that the mind is reducible to basic or simple autonomous operations, like cogs in a machine, which constitutes for many people a seeming affront to our privileged place in the universe as humans, in general, and to our culturally ingrained notions of human dignity being inextricably tied to physical irreducibility, in particular. The intuitive notions of human dignity and the ontologically privileged nature of humanity have yet to catch up with physicalism and scientific materialism (a.k.a. metaphysical naturalism). It is not the proponents of Mind Uploading who are raising these claims, but science itself – and for hundreds of years, I might add. Man’s privileged and physically irreducible ontological status has become more and more undermined throughout history since at least as far back as Darwin’s theory of evolution, which brought the notion of the past and future phenotypic evolution of humanity into scientific plausibility for the first time.

It is also seemingly disenfranchising to many people, in that notions of human free will and autonomy seem to be challenged by physical reductionism and determinism – perhaps because many people’s notions of free will are still associated with a non-physical, untouchably metaphysical human soul (i.e., mind-body dualism) which lies outside the purview of physical causality. To compare the brain to a “mindless machine” is still for many people disenfranchising to the extent that it questions the legitimacy of their metaphysically tied notions of free will.

Just because the sheer audacity of experience and the raucous beauty of feeling are ultimately reducible to physical and procedural operations (I hesitate to use the word “mechanisms” for its likewise misconnotative conceptual associations) does not take away from them. If they were the result of some untouchable metaphysical property – a sentiment that mind-body dualism promulgated for quite some time – then there would be no way for us to understand them, to really appreciate them, or to change them (e.g., improve upon them) in any way. Physicalism and scientific materialism are needed if we are to ever see how it is done and to ever hope to change it for the better. Figuring out how things work is one of Man’s highest merits – and there is no reason Man’s urge to discover and determine the underlying causes of the world should not apply to his own self as well.

Moreover, the fact that experience, feeling, being, and mind result from the convergence of individually simple systems and processes makes the mind’s emergence from such simple convergence all the more astounding, amazing, and rare, not less! If the complexity and unpredictability of mind were the result of complex and unpredictable underlying causes (as the metaphysical notions of mind-body dualism connote), then the fact that mind turned out to be complex and unpredictable wouldn’t be much of a surprise. The simplicity of the mind’s underlying mechanisms makes the mind’s emergence all the more amazing, and should not take away from our human dignity but should instead raise it up to heights yet unheralded.

Now that we have addressed such potentially harmful second-order misinterpretations, we will address their root: the common misinterpretations likely to result from the phrase “the computability of the mind”. Not only does this phrase not say that the mind is similar in basic operation to conventional paradigms of computation – as though a neuron were comparable to a logic gate or transistor – but neither does it necessarily make the more credible claim that the mind is like a computer in general. That misinterpretation makes the notion of Mind Uploading seem dubious because it conflates two different types of physical systems – computers and the brain.

The kidney is just as computable as the brain. That is to say, the computability of mind denotes the ability to make predictively accurate computational models (i.e., simulations and emulations) of biological systems like the brain, and is not dependent on anything like a fundamental operational similarity between biological brains and digital computers. We can make a computational model of a given physical system, feed it some typical inputs, and get a resulting output that approximately matches the real-world (i.e., physical) output of that system.
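As a hypothetical illustration of this sense of “computability” (the system and numbers below are invented purely for illustration), consider a computational model of a simple physical system – a capacitor discharging through a resistor – whose simulated output closely tracks the analytically known physical behavior, even though the circuit is in no sense “like a computer”:

```python
import math

def simulate_rc_discharge(v0, r, c, dt, steps):
    """Numerically model a discharging RC circuit: dV/dt = -V / (R*C)."""
    v = v0
    trace = [v]
    for _ in range(steps):
        v += dt * (-v / (r * c))  # forward-Euler update of the physical law
        trace.append(v)
    return trace

# Compare the model's prediction against the exact physical solution.
v0, r, c, dt, steps = 5.0, 1000.0, 1e-3, 1e-3, 1000
model = simulate_rc_discharge(v0, r, c, dt, steps)
exact = [v0 * math.exp(-i * dt / (r * c)) for i in range(steps + 1)]
max_error = max(abs(m - e) for m, e in zip(model, exact))
print(max_error)  # small: the simulation tracks the physical system
```

Nothing in this model depends on the circuit operating like a computer; the same modeling strategy applies, at vastly greater cost and complexity, to kidneys and brains.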

The computability of the mind has very little to do with the mind acting as or operating like a computer, and much, much more to do with the fact that we can build predictively accurate computational models of physical systems in general. This also, advantageously, negates and obviates many of the seemingly dehumanizing and indignifying connotations identified above that often result from the claim that the brain is like a machine or like a computer. It is not that the brain is like a computer – it is just that computers are capable of predictively modeling the physical systems of the universe itself.

We Want Not To Become Machines, But To Keep Up With Them!

Too often is uploading portrayed as the means to superhuman speed of thought or to transcending our humanity. It is not that we want to become less human, or to become like a machine. For most Transhumanists, and indeed most proponents of Mind Uploading and Substrate-Independent Minds, meat is machinery anyway. In other words, there is no real (i.e., legitimate) ontological distinction between human minds and machines to begin with. Too often is uploading seen as the desire for superhuman abilities. Too often is it seen as a bonus, nice but ultimately unnecessary.

I vehemently disagree. Uploading has been from the start, for me (and I think for many other proponents and supporters of Mind Uploading), a means of life extension – of deferring and ultimately defeating untimely, involuntary death – as opposed to an ultimately unnecessary means to greater powers, a more privileged position relative to the rest of humanity, or an eschewal of our humanity in a fit of contempt for the flesh. We do not want to turn ourselves into Artificial Intelligence, which is a somewhat perverse and burlesque caricature that is associated with Mind Uploading far too often.

The notion of gradual uploading is implicitly a means of life extension. Gradual uploading will be significantly harder to accomplish than destructive uploading. It requires a host of technologies and methodologies – brain scanning, in-vivo locomotive systems (such as, but not limited to, nanotechnology), or else extremely robust biotechnology – and a host of precautions to prevent causing phenomenal discontinuity, such as giving each non-biological functional replacement time to causally interact with adjacent biological components before the next biological component with which it causally interacts is likewise replaced. Gradual uploading is a much harder feat than destructive uploading, and the only advantage it has over destructive uploading is preserving the phenomenal continuity of a single specific person. In this way it is implicitly a means of life extension rather than a means to the creation of AGI, because its only benefit is the preservation and continuation of a single, specific human life, and that benefit entails a host of added precautions and additional necessitated technological and methodological infrastructures.

If we didn’t have to fear the creation of recursively self-improving AI, biased towards being likely to recursively self-modify at a rate faster than humans are likely to (or indeed, are able to safely – that is, gradually enough to prevent phenomenal discontinuity), then I would favor biotechnological methods of achieving indefinite lifespans over gradual uploading. But with the way things are, I am an advocate of gradual Mind Uploading first and foremost because I think it may prove necessary to prevent humanity from being left behind by recursively self-modifying superintelligences. I hope that it ultimately will not prove necessary – but at the current time I feel that it is somewhat likely.

Most people who wish to implement or accelerate an intelligence explosion à la I. J. Good – and, more recently, Vernor Vinge and Ray Kurzweil – wish to do so because they feel that such a recursively self-modifying superintelligence (RSMSI) could essentially solve all of humanity’s problems: disease, death, scarcity, existential insecurity. I think that the potential benefits of creating a RSMSI are outweighed by the drastic increase in existential risk it would entail in making any one entity superintelligent relative to humanity. The old God of yore is finally going out of fashion, one and a quarter centuries late to his own eulogy. Let’s please not make another one, now with a little reality under his belt this time around.

Intelligence is a far greater source of existential and global catastrophic risk than any technology that could be wielded by such an intelligence (except, of course, for technologies that would allow an intelligence to increase its own intelligence). Intelligence can invent new technologies and conceive of ways to counteract any defense systems we put in place to protect against the destructive potentials of any given technology. A superintelligence is far more dangerous than rogue nanotech (i.e., grey-goo) or bioweapons. When intelligence comes into play, then all bets are off. I think culture exemplifies this prominently enough. Moreover, for the first time in history the technological solutions to these problems – death, disease, scarcity – are on the conceptual horizon. We can fix these problems ourselves, without creating an effective God relative to Man and incurring the extreme potential for complete human extinction that such a relative superintelligence would entail.

Thus uploading constitutes one of the means by which humanity can choose, volitionally, to stay on the leading edge of change, discovery, invention, and novelty, if the creation of a RSMSI is indeed imminent. It is not that we wish to become machines and eschew our humanity – rather, the loss of autonomy and freedom inherent in the creation of a relative superintelligence is antithetical to the defining features of humanity. In order to preserve the uniquely human thrust toward greater self-determination in the face of such a RSMSI, or at least be given the choice of doing so, we may require the ability to gradually upload so as to stay on equal footing in terms of speed of thought and general level of intelligence (which is roughly correlative with the capacity to effect change in the world and thus to determine its determining circumstances and conditions as well).

In a perfect world we wouldn’t need to take the chance of phenomenal discontinuity inherent in gradual uploading. In gradual uploading there is always a chance, no matter how small, that we will come out the other side of the procedure as a different (i.e., phenomenally distinct) person. We can seek to minimize the chances of that outcome by extending the degree of graduality with which we gradually replace the material constituents of the mind, and by minimizing the scale at which we gradually replace those material constituents (i.e., gradual substrate replacement one ion-channel at a time would be likelier to ensure the preservation of phenomenal continuity than gradual substrate replacement neuron by neuron would be). But there is always a chance.

This is why biotechnological means of indefinite lifespans have an immediate advantage over uploading, and why if non-human RSMSI were not a worry, I would favor biotechnological methods of indefinite lifespans over Mind Uploading. But this isn’t the case; rogue RSMSI are a potential problem, and so the ability to secure our own autonomy in the face of a rising RSMSI may necessitate advocating Mind Uploading over biotechnological methods of indefinite lifespans.

Mind Uploading has some ancillary benefits over biotechnological means of indefinite lifespans as well, however. If functional equivalence is validated (i.e., if it is validated that the basic approach works), mitigating existing sources of damage becomes categorically easier. In physical embodiment, repairing structural, connectional, or procedural sub-systems in the body requires (1) a means of determining the source of damage and (2) a host of technologies and corresponding methodologies to enter the body and make physical changes to negate or otherwise obviate the structural, connectional, or procedural source of such damages, and then exit the body without damaging or causing dysfunction to other systems in the process. Both of these requirements become much easier in the virtual embodiment of whole-brain emulation.

First, looking toward requirement (2), we do not need to design any technologies and methodologies for entering and leaving the system without damage or dysfunction, or for actually implementing physical changes leading to the remediation of the sources of damage. In virtual embodiment this requires nothing more than rewriting information. In the case of WBE we have the capacity to rewrite information as easily as it was written in the first place; while we would still need to know what changes to make (which is really the hard part in this case), actually implementing those changes is as easy as rewriting a word-processing file. There is no categorical difference, since it is all information, and we would already have a means of rewriting information.

Looking toward requirement (1), actually elucidating the structural, connectional or procedural sources of damage and/or dysfunction, we see that virtual embodiment makes this much easier as well. In physical embodiment we would need to make changes to the system in order to determine the source of the damage. In virtual embodiment we could run a section of emulation for a given amount of time, change or eliminate a given informational variable (i.e. structure, component, etc.) and see how this affects the emergent system-state of the emulation instance.

Iteratively doing this to different components and different sequences of components, in trial-and-error fashion, should lead to the elucidation of the structural, connectional or procedural sources of damage and dysfunction. The fact that an emulation can be run faster (thus accelerating this iterative change-and-check procedure) and that we can “rewind” or “play back” an instance of emulation time exactly as it occurred initially means that noise (i.e., sources of error) from natural systemic state-changes would not affect the results of this procedure, whereas in physicality systems and structures are always changing, which constitutes a source of experimental noise. The conditions of the experiment would be exactly the same in every iteration of this change-and-check procedure. Moreover, the ability to arbitrarily speed up and slow down the emulation will aid in our detecting and locating the emergent changes caused by changing or eliminating a given microscale component, structure, or process.
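A purely illustrative sketch of this change-and-check procedure follows (the three-component “emulation” below is a made-up toy, not an actual whole-brain emulation): because the emulation is deterministic and can be re-run from an identical initial state, substituting one healthy candidate component per run and comparing outputs localizes the source of dysfunction without experimental noise.

```python
def run_emulation(components, steps=100, state=1.0):
    """Deterministic toy 'emulation': each component transforms the state
    once per step. Re-running from the same initial state replays exactly,
    eliminating the noise of a constantly changing physical system."""
    for _ in range(steps):
        for component in components:
            state = component(state)
    return state

# A healthy reference system and a copy with one (unknown) faulty component.
healthy = [lambda s: s + 1, lambda s: s * 0.5, lambda s: s - 0.1]
damaged = [healthy[0], lambda s: s * 0.4, healthy[2]]  # fault at index 1

healthy_output = run_emulation(healthy)

# Change-and-check: substitute healthy candidates one at a time into the
# damaged system; the substitution that restores healthy behavior
# localizes the structural source of the dysfunction.
best_index, best_error = None, None
for i in range(len(damaged)):
    trial = list(damaged)
    trial[i] = healthy[i]
    error = abs(run_emulation(trial) - healthy_output)
    if best_error is None or error < best_error:
        best_index, best_error = i, error
print(best_index)  # index 1 is identified as the damage source
```

Every trial starts from an identical initial state, so the conditions of the “experiment” are exactly the same in each iteration – the property the essay identifies as the key advantage over physically embodied systems.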

Thus the process of finding the sources of damage correlative with disease and aging (especially insofar as the brain is concerned) could be greatly improved through the process of uploading. Moreover, WBE should accelerate the technological and methodological development of the computational emulation of biological systems in general, meaning that it would be possible to use such procedures to detect the structural, connectional, and procedural sources of age-related damage and systemic dysfunction in the body itself, as opposed to just the brain, as well.

Note that this iterative change-and-check procedure would be just as possible via destructive uploading as it would with gradual uploading. Moreover, in terms of people actually instantiated as whole-brain emulations, actually remediating those structural, connectional, and/or procedural sources of damage is much easier for WBEs than for physically embodied humans. Incidentally, if being able to distinguish the homeostatic, regulatory, and metabolic structures and processes in the brain from the computational or signal-processing structures and processes in the brain is a requirement for uploading (which I don’t think it necessarily is, although I do think that such a distinction would decrease the ultimate computational intensity and thus the computational requirements of uploading, thereby allowing it to be implemented sooner and have wider availability), then this iterative change-and-check procedure could also be used to accelerate the elucidation of such a distinction, for the same reasons that it could accelerate the elucidation of structural, connectional, and procedural sources of age-related systemic damage and dysfunction.

Lastly, while uploading (particularly instances in which a single entity or small group of entities is uploaded prior to the rest of humanity – i.e. not a maximally distributed intelligence explosion) itself constitutes a source of existential risk, it also constitutes a means of mitigating existential risk as well. Currently we stand on the surface of the earth, naked to whatever might lurk in the deep night of space. We have not been watching the sky for long enough to know with any certainty that some unforeseen cosmic process could not come along to wipe us out at any time. Uploading would allow at least a small portion of humanity to live virtually on a computational substrate located deep underground, away from the surface of the earth and its inherent dangers, thus preserving the future human heritage should an extinction event befall humanity. Uploading would also prevent the danger of being physically killed by some accident of physicality, like being hit by a bus or struck by lightning.

Uploading is also the most resource-efficient means of life extension on the table, because virtual embodiment essentially negates the need for many physical resources, instead necessitating chiefly one – energy – and increasing computational price-performance means that just how much a given amount of energy can accomplish is continually increasing.

It also mitigates the most pressing ethical problem of indefinite lifespans – overpopulation. In virtual embodiment, overpopulation ceases to be an issue almost ipso facto. I agree with John Smart’s STEM compression hypothesis – that the advantages proffered by virtual embodiment will make choosing it over physical embodiment, in the long run at least, an obvious choice for most civilizations – and I think it will be the volitional choice for most future persons. It is safer, more resource-efficient (and thus more ethical, if one thinks that forestalling future births in order to maintain existing life is unethical), and the more advantageous choice. We will not need to say, “Migrate into virtuality if you want another physically embodied child.” Most people will make the choice to go VR themselves, simply due to the numerous advantages and the lack of any experiential incomparabilities (i.e., modalities of experience possible in physicality but not possible in VR).

So in summary, yes, Mind Uploading (especially gradual uploading) is more a means of life extension than a means to arbitrarily greater speed of thought, intelligence, or power (i.e., capacity to effect change in the world). We do not seek to become machines, only to retain the capability of choosing to remain on equal footing with them if the creation of RSMSI is indeed imminent. There is no other reason to increase our collective speed of thought, and to do so would be arbitrary – unless we expected to be unable to prevent the physical end of the universe, in which case it would increase the ultimate amount of time and number of lives that could be instantiated in the time we have left.

The fallaciousness of many of these misconceptions may be glaringly obvious, especially to those readers familiar with Mind Uploading as a notion and with Substrate-Independent Minds and/or Whole-Brain Emulation as disciplines. I may be to some extent preaching to the choir in these cases. But I find many of these misinterpretations far too predominant and recurrent to be left alone.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Transhumanism and Mind Uploading Are Not the Same – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 10, 2013
******************************

In what is perhaps the most absurd attack on transhumanism to date, Mike Adams of NaturalNews.com equates this broad philosophy and movement with “the entire idea that you can ‘upload your mind to a computer’” and further posits that the only kind of possible mind uploading is the destructive kind, where the original, biological organism ceases to exist. Adams goes so far as calling transhumanism a “death cult much like the infamous Heaven’s Gate cult led by Marshall Applewhite.”

I will not devote this essay to refuting any of Adams’s arguments against destructive mind uploading, because no serious transhumanist thinker of whom I am aware endorses the kind of procedure Adams uses as a straw man. For anyone who wishes to continue existing as an individual, uploading the contents of the mind to a computer and then killing the body is perhaps the most bizarrely counterproductive possible activity, short of old-fashioned suicide. Instead, Adams’s article – all the misrepresentations aside – offers the opportunity to make important distinctions of value to transhumanists.

First, having a positive view of mind uploading is neither necessary nor sufficient for being a transhumanist. Mind uploading has been posited as one of several routes toward indefinite human life extension. Other routes include the periodic repair of the existing biological organism (as outlined in Aubrey de Grey’s SENS project or as entailed in the concept of nanomedicine) and the augmentation of the biological organism with non-biological components (Ray Kurzweil’s actual view, as opposed to the absurd positions Adams attributes to him). Transhumanism, as a philosophy and a movement, embraces the lifting of the present limitations upon the human condition – limitations that arise out of the failures of human biology and unaltered physical nature. Max More, in “Transhumanism: Towards a Futurist Philosophy”, writes that “Transhumanism differs from humanism in recognizing and anticipating the radical alterations in the nature and possibilities of our lives resulting from various sciences and technologies such as neuroscience and neuropharmacology, life extension, nanotechnology, artificial ultraintelligence, and space habitation, combined with a rational philosophy and value system.” That Adams would take this immensity of interrelated concepts, techniques, and aspirations and equate it to destructive mind uploading is, plainly put, mind-boggling. There is ample room in transhumanism for a variety of approaches toward lifting the limitations of the human condition. Some of these approaches will be more successful than others, and no one approach is obligatory for those wishing to consider themselves transhumanists.

Moreover, Adams greatly misconstrues the positions of those transhumanists who do support mind uploading. For most such transhumanists, a digital existence is not seen as superior to their current biological existences, but as rather a necessary recourse if or when it becomes impossible to continue maintaining a biological existence. Dmitry Itskov’s 2045 Initiative is perhaps the most prominent example of the pursuit of mind uploading today. The aim of the initiative is to achieve cybernetic immortality in a stepwise fashion, through the creation of a sequence of avatars that gives the biological human an increasing amount of control over non-biological components. Avatar B, planned for circa 2020-2025, would involve a human brain controlling an artificial body. If successful, this avatar would prolong the existence of the biological brain when other components of the biological body have become too irreversibly damaged to support it. Avatar C, planned for circa 2030-2035, would involve the transfer of a human mind from a biological to a cybernetic brain, after the biological brain is no longer able to support life processes. There is no destruction intended in the 2045 Avatar Project Milestones, only preservation of some manner of intelligent functioning of a person whom the status quo would instead relegate to becoming food for worms. The choice between decomposition and any kind of avatar is a no-brainer (well, a brainer actually, for those who choose the latter).

Is Itskov’s path toward immortality the best one? I personally prefer SENS, combined with nanomedicine and piecewise artificial augmentations of the sort that are already beginning to occur (witness the amazing bebionic3 prosthetic hand). Itskov’s approach appears to assume that the technology for transferring the human mind to an entirely non-biological body will become available sooner than the technology for incrementally maintaining and fortifying the biological body to enable its indefinite continuation. My estimation is the reverse. Before scientists will be able to reverse-engineer not just the outward functions of a human brain but also its immensely complex and intricate internal structure, we will have within our grasp the ability to conquer an ever greater number of perils that befall the biological body and to repair the body using both biological and non-biological components.

The biggest hurdle for mind uploading to overcome is one that does not arise with the approach of maintaining the existing body and incrementally replacing defective components. This hurdle is the preservation of the individual’s unique and irreplaceable vantage point upon the world – his or her direct sense of being that person and no other. I term this direct vantage point an individual’s “I-ness”.  Franco Cortese, in his immensely rigorous and detailed conceptual writings on the subject, calls it “subjective-continuity” and devotes his attention to techniques that could achieve gradual replacement of biological neurons with artificial neurons in such a way that there is never a temporal or operational disconnect between the biological mind and its later cybernetic instantiation. Could the project of mind uploading pursue directions that would achieve the preservation of the “I-ness” of the biological person? I think this may be possible, but only if the resulting cybernetic mind is structurally analogous to the biological mind and, furthermore, maintains the temporal continuity of processes exhibited by an analog system, as opposed to a digital system’s discrete “on-off” states and the inability to perform multiple exactly simultaneous operations. Furthermore, only by developing the gradual-replacement approaches explored by Cortese could this prospect of continuing the same subjective experience (as opposed to simply creating a copy of the individual) be realized. But Adams, in his screed against mind uploading, seems to ignore all of these distinctions and explorations. Indeed, he appears to be oblivious of the fact that, yes, transhumanists have thought quite a bit about the philosophical questions involved in mind uploading. 
He seems to think that in mind uploading, you simply “copy the brain and paste it somewhere else” and hope that “somehow magically that other thing becomes ‘you.’” Again, no serious proponent of mind uploading – and, more generally, no serious thinker who has considered the subject – would hold this misconception.

Adams is wrong on a still further level, though. Not only is he wrong to equate transhumanism with mind uploading; not only is he wrong to declare all mind uploading to be destructive – he is also wrong to condemn the type of procedure that would simply make a non-destructive copy of an individual. This type of “backup” creation has indeed been advocated by transhumanists such as Ray Kurzweil. While a pure copy of one’s mind or its contents would not transfer one’s “I-ness” to a digital substrate and would not enable one to continue experiencing existence after a fatal illness or accident, it could definitely help an individual regain his memories in the event of brain damage or amnesia. Furthermore, if the biological individual were to irreversibly perish, such a copy would at least preserve vital information about the biological individual for the benefit of others. Additionally, it could enable the biological individual’s influence upon the world to be more powerfully actualized by a copy that considers itself to have the biological individual’s memories, background, knowledge, and personality. If we had with us today copies of the minds of Archimedes, Benjamin Franklin, and Nikola Tesla, we would certainly all benefit greatly from continued outpourings of technological and philosophical innovation. The original geniuses would not know or care about this, since they would still be dead, but we, in our interactions with minds very much like theirs, would be immensely better off than we are with only their writings and past inventions at our disposal.

Yes, destructive digital copying of a mind would be a bafflingly absurd and morally troubling undertaking – but recognition of this is neither a criticism of transhumanism nor of any genuinely promising projects of mind uploading. Instead, it is simply a matter of common sense, a quality which Mike Adams would do well to acquire.

Immortality: Bio or Techno? – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
June 5, 2013
******************************
This essay is the eleventh and final chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first ten chapters were previously published on The Rational Argumentator under the following titles:
***

I Was a Techno-Immortalist Before I Came of Age

From the preceding chapters in this series, one can see that I recapitulated many notions and conclusions found in normative Whole-Brain Emulation. I realized that measuring the functional divergence between a candidate functional-equivalent and its original, while virtually or artificially replicating environmental stimuli so as to coordinate their inputs, provides an experimental methodology for empirically validating the sufficiency and efficacy of different approaches. (Note, however, that such tests could not be performed to determine which NRU-designs or replication-approaches would preserve subjective-continuity, if the premises entertained during later periods of my project—that subjective-continuity may require a sufficient degree of operational “sameness”, and not just a sufficient degree of functional “sameness”—are correct.) I realized that we would only need to replicate in intensive detail and rigor those parts of our brain manifesting our personalities and higher cognitive faculties (i.e., the neocortex), and could get away with replicating at lower functional resolution the parts of the nervous system dealing with perception, actuation, and feedback between perception and actuation.

I read Eric Drexler’s Engines of Creation and imported the use of nanotechnology to facilitate both functional-replication (i.e., the technologies and techniques needed to replicate the functional and/or operational modalities of existing biological neurons) and the intensive, precise, and accurate scanning necessitated thereby. This was essentially Ray Kurzweil’s and Robert Freitas’s approach to the technological infrastructure needed for mind-uploading, as I discovered in 2010 via The Singularity is Near.

My project also bears stark similarities with Dmitry Itskov’s Project Avatar. My work on conceptual requirements for transplanting the biological brain into a fully cybernetic body — taking advantage of the technological and methodological infrastructures already in development for use in the separate disciplines of robotics, prosthetics, Brain-Computer Interfaces and sensory-substitution to facilitate the operations of the body — is a prefigurement of his Phase 1. My later work in approaches to functional replication of neurons for the purpose of gradual substrate replacement/transfer and integration also parallel his later phases, in which the brain is gradually replaced with an equivalent computational emulation.

The main difference between my project and the extant Techno-Immortalist approaches, however, lies in my later inquiries into neglected potential bases for (a) our sense of experiential subjectivity (the feeling of being, what I’ve called immediate subjective-continuity)—and thus the entailed requirements for mental substrates aiming to maintain or attain such immediate subjectivity—and (b) our sense of temporal subjective-continuity (the feeling of being the same person through a process of gradual substrate-replacement—which I take pains to remind the reader already exists in the biological brain via the natural biological process of molecular turnover, which I called metabolic replacement throughout the course of the project), and, likewise, requirements for mental substrates aiming to maintain temporal subjective-continuity through a gradual substrate-replacement/transfer procedure.

In this final chapter, I summarize the main approaches to subjective-continuity thus far considered, including possible physical bases for its current existence and the entailed requirements for NRU designs (that is, for Techno-Immortalist approaches to indefinite-longevity) that maintain such physical bases of subjective-continuity. I will then explore why “Substrate-Independent Minds” is a useful and important term, and try to dispel one particularly common and easy-to-make misconception resulting from it.

Why Should We Worry about Subjective-Continuity?

This concern marks perhaps the most telling difference between my project and normative Whole-Brain Emulation. Instead of stopping at the presumption that functional equivalence correlates with immediate subjective-continuity and temporal subjective-continuity, I explored several features of neural operation that looked like candidates for providing a basis of both types of subjective-continuity, by looking for those systemic properties and aspects that the biological brain possesses and other physical systems don’t. The physical system underlying the human mind (i.e., the brain) possesses experiential subjectivity; my premise was that we should look for properties not shared by other physical systems to find a possible basis for the property of immediate subjective-continuity. I’m not claiming that any of the aspects and properties considered definitely constitute such a basis; they were merely the avenues I explored throughout my 4-year quest to conquer involuntary death. I do claim, however, that we are forced to conclude that some aspect shared by the individual components (e.g., neurons) of the brain and not shared by other types of physical systems forms such a basis (which doesn’t preclude the possibility of immediate subjective-continuity being a spectrum or gradient rather than a definitive “thing” or process with non-variable parameters), or else that immediate subjective continuity is a normal property of all physical systems, from atoms to rocks.

A phenomenological proof of the non-equivalence of function and subjectivity or subjective-experientiality is the physical irreducibility of qualia – that we could understand in intricate detail the underlying physics of the brain and sense-organs, and nowhere derive or infer the nature of the qualia such underlying physics embodies. To experimentally verify which approaches to replication preserve both functionality and subjectivity would necessitate a science of qualia. This could be conceivably attempted through making measured changes to the operation or inter-component relations of a subject’s mind (or sense organs)—or by integrating new sense organs or neural networks—and recording the resultant changes to his experientiality—that is, to what exactly he feels. Though such recordings would be limited to his descriptive ability, we might be able to make some progress—e.g., he could detect the generation of a new color, and communicate that it is indeed a color that doesn’t match the ones normally available to him, while still failing to communicate to others what the color is like experientially or phenomenologically (i.e., what it is like in terms of qualia). This gets cruder the deeper we delve, however. 
While we have unchanging names for some “quales” (i.e., green, sweetness, hot, and cold), when it gets into the qualia corresponding with our perception of our own “thoughts” (which will designate all non-normatively perceptual experiential modalities available to the mind—thus, this would include wordless “daydreaming” and exclude autonomic functions like digestion or respiration), we have both far less precision (i.e., fewer words to describe) and less accuracy (i.e., too many words for one thing, which the subject may confuse; the lack of a quantitative definition for words relating to emotions and mental modalities/faculties seems to ensure that errors may be carried forward and increase with each iteration, making precise correlation of operational/structural changes with changes to qualia or experientiality increasingly harder and more unlikely).

Thus whereas the normative movements of Whole-Brain Emulation and Substrate-Independent Minds stopped at functional replication, I explored approaches to functional replication that preserved experientiality (i.e., a subjective sense of anything) and that maintained subjective-continuity (the experiential correlate of feeling like being yourself) through the process of gradual substrate-transfer.

I do not mean to undermine in any way Whole-Brain Emulation and the movement towards Substrate-Independent Minds promoted by such people as Randal Koene via, formerly, his minduploading.org website and, more recently, his Carbon Copies project, Anders Sandberg and Nick Bostrom through their WBE Roadmap, and various other projects on connectomes. These projects are untellably important, but conceptions of subjective-continuity (not pertaining to its relation to functional equivalence) are beyond their scope.

Whether or not subjective-continuity is possible through a gradual-substrate-replacement/transfer procedure is not under question. That we achieve and maintain subjective-continuity despite our constituent molecules being replaced within a period of 7 years, through what I’ve called “metabolic replacement” but what would more normatively be called “molecular-turnover” in molecular biology, is not under question either. What is under question is (a) what properties biological nervous systems possess that could both provide a potential physical basis for subjective-continuity and that other physical systems do not possess, and (b) what the design requirements are for approaches to gradual substrate replacement/transfer that preserve such postulated sources of subjective-continuity.

Graduality

This was the first postulated basis for preserving temporal subjective-continuity. Our bodily systems’ constituent molecules are all replaced within a span of 7 years, which provides empirical verification for the existence of temporal subjective-continuity through gradual substrate replacement. This is not, however, an actual physical basis for immediate subjective-continuity, like the later avenues of enquiry; it is instead a way of avoiding externally induced subjective-discontinuity, rather than of maintaining the existing biological bases for subjective-continuity. We are most likely to avoid negating subjective-continuity through a substrate-replacement procedure if we try to maintain the existing degree of graduality (the molecular-turnover or “metabolic-replacement” rate) that exists in biological neurons.

The reasoning behind concerns of graduality also serves to illustrate a common misconception created by the term “Substrate-Independent Minds”. This term should denote the premise that mind can be instantiated on different types of substrate, in the way that a given computer program can run on different types of computational hardware. It stems from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that mind is an emergent process not reducible to its isolated material constituents, while still being instantiated thereby. The first (legitimate) interpretation is a refutation of all claims of metaphysical vitalism or substance dualism. The term should not denote the claim that, because mind is software, we can send our minds (say, encoded in a wireless signal) from one substrate to another without subjective-discontinuity. This second meaning would incur the emergent effect of a non-gradual substrate-replacement procedure (that is, the wholesale reconstruction of a duplicate mind without any gradual integration procedure). In such a case one stops all causal interaction between components of the brain—in effect putting it on pause. The brain is now static. This is different even from being in an inoperative state, where at least the components (i.e., neurons) still undergo minor operational fluctuations and are still “on” in an important sense (see “Immediate Subjective-Continuity” below), which is not the case here. Beaming between substrates necessitates that all causal interaction—and thus procedural continuity—between software-components is halted during the interval of time in which the information is encoded, sent wirelessly, and subsequently decoded. It would be reinstantiated upon arrival in the new substrate, yes, but not without being put on pause in the interim.
The phrase “Substrate-Independent Minds” is an important and valuable one and should indeed be championed with righteous vehemence—but only in regard to its first meaning (that mind can be instantiated on various different substrates) and not its second, illegitimate meaning (that we ourselves can switch between mental substrates, without any sort of gradual-integration procedure, and still retain subjective-continuity).

Later lines of thought in this regard consisted of positing several sources of subjective-continuity and then conceptualizing various different approaches or varieties of NRU-design that would maintain these aspects through the gradual-replacement procedure.

Immediate Subjective-Continuity

This line of thought explored whether certain physical properties of biological neurons provide the basis for subjective-continuity, and whether current computational paradigms would need to possess such properties in order to serve as a viable substrate-for-mind—that is, one that maintains subjective-continuity. The biological brain has massive parallelism—that is, separate components are instantiated concurrently in time and space. They actually exist and operate at the same time. By contrast, current paradigms of computation, with a few exceptions, are predominantly serial. They instantiate a given component or process one at a time and jump between components or processes so as to integrate these separate instances and create the illusion of continuity. If such computational paradigms were used to emulate the mind, then only one component (e.g., neuron or ion-channel, depending on the chosen model-scale) would be instantiated at a given time. This line of thought postulates that computers emulating the mind may need to be massively parallel in the same way that the biological brain is in order to preserve immediate subjective-continuity.
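The serial-versus-simultaneous distinction can be made concrete with a small sketch. Everything here is invented for illustration (the two-neuron network, the weights, and the update rules model no actual NRU design); it shows only that updating components one at a time, in place, can produce a different trajectory than updating all components from the same prior state at once:

```python
# Toy illustration: serial (one-component-at-a-time) vs. synchronous
# ("all at once", as in a massively parallel substrate) updates of the
# same tiny network can diverge.

def step_serial(state, weights):
    """Update each neuron one at a time, reusing already-updated values."""
    s = list(state)
    for i in range(len(s)):
        s[i] = sum(weights[i][j] * s[j] for j in range(len(s)))
    return s

def step_synchronous(state, weights):
    """Update every neuron simultaneously from the previous state."""
    return [sum(weights[i][j] * state[j] for j in range(len(state)))
            for i in range(len(state))]

# Each neuron simply copies the other's value.
weights = [[0.0, 1.0],
           [1.0, 0.0]]
state = [1.0, 2.0]

print(step_serial(state, weights))       # [2.0, 2.0]: neuron 1 sees neuron 0's new value
print(step_synchronous(state, weights))  # [2.0, 1.0]: both see the old state
```

Whether such a difference matters for subjective-continuity is, of course, exactly the open question the passage raises; the sketch only shows that the two modes of instantiation are not operationally identical.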

Procedural Continuity

Much like the preceding line of thought, this postulates that a possible basis for temporal subjective-continuity is the resting membrane potential of neurons. While in an inoperative state—i.e., not being impinged by incoming action-potentials, or not being stimulated—it (a) isn’t definitively off, but rather produces a baseline voltage that assures that there is no break (or region of discontinuity) in its operation, and (b) still undergoes minor fluctuations from the baseline value within a small deviation-range, thus showing that causal interaction amongst the components emergently instantiating that resting membrane potential (namely ion-pumps) never halts. Logic gates, on the other hand, do not produce a continuous voltage when in an inoperative state. This line of thought claims that computational elements used to emulate the mind should exhibit the generation of such a continuous inoperative-state signal (e.g., voltage) in order to maintain subjective-continuity. The claim’s stronger version holds that the continuous inoperative-state signal produced by such computational elements undergoes minor fluctuations (i.e., state-transitions) allowed within the range of the larger inoperative-state signal, which maintains causal interaction among lower-level components and thus exhibits the postulated basis for subjective-continuity—namely procedural continuity.
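The contrast above can be sketched numerically. This is a toy model only: the -70 mV baseline is the textbook resting potential, but the fluctuation magnitude and the functions themselves are invented here for illustration, not drawn from any NRU proposal:

```python
import random

# Toy contrast: a resting neuron still produces a baseline signal with
# small fluctuations (it is never truly "off"), whereas an idle logic
# gate holds no continuous signal at all.

def resting_neuron_trace(steps, baseline=-70.0, jitter=0.5, seed=42):
    """Baseline membrane potential (mV) plus small random fluctuations."""
    rng = random.Random(seed)
    return [baseline + rng.uniform(-jitter, jitter) for _ in range(steps)]

def idle_logic_gate_trace(steps):
    """An inoperative logic gate: a flat, static, signal-less trace."""
    return [0.0] * steps

neuron = resting_neuron_trace(1000)
gate = idle_logic_gate_trace(1000)

# The neuron's trace stays near baseline yet keeps varying; the gate's
# trace is a single unchanging value.
print(min(neuron), max(neuron))
print(len(set(neuron)) > 1, len(set(gate)) == 1)
```

The stronger version of the claim corresponds to requiring that an artificial element's trace look like `resting_neuron_trace` (a continuous signal with permitted internal state-transitions) rather than like `idle_logic_gate_trace`.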

Operational Isomorphism

This line of thought claims that a possible source for subjective-continuity is the baseline components comprising the emergent system instantiating mind. In physicality this isn’t a problem because the higher-scale components (e.g., single neurons, sub-neuron components like ion-channels and ion-pumps, and individual protein complexes forming the sub-components of an ion-channel or pump) are instantiated by the lower-level components. Those lower-level components are more similar in terms of the rules determining behavior and state-changes. At the molecular scale, the features determining state-changes (intra-molecular forces, atomic valences, etc.) are the same. This changes as we go up the scale—most notably at the scale of high-level neural regions/systems. In a software model, however, we have a choice as to what scale we use as our model-scale. This postulated source of subjective-continuity would entail that we choose as our model-scale one in which the components of that scale have a high degree of this property (operational isomorphism—or similarity) and that we not choose a scale at which the components have a lesser degree of this property.

Operational Continuity

This line of thought explored the possibility that we might introduce operational discontinuity by modeling (i.e., computationally instantiating) not the software instantiated by the physical components of the neuron, but instead those physical components themselves—which for illustrative purposes can be considered as the difference between instantiating software and instantiating the physics of the logic gates giving rise to the software. Though the software would still be instantiated—as a vicarious result of computationally instantiating its biophysical foundation, rather than directly—we may be introducing additional operational steps and thus adding an unnecessary dimension of discontinuity that needlessly jeopardizes the likelihood of subjective-continuity.

These concerns are wholly divorced from functionalist concerns. If we disregarded these potential sources of subjective-continuity, we could still functionally-replicate a mind in all empirically-verifiable measures yet nonetheless fail to create minds possessing experiential subjectivity. Moreover, the verification experiments discussed in Part 2 do provide a falsifiable methodology for determining which approaches best satisfy the requirements of functional equivalence. They do not, however, provide a method of determining which postulated sources of subjective-continuity are true—simply because we have no falsifiable measures to determine either immediate or temporal subjective-discontinuity, other than functionality. If functional equivalence failed, it would tell us that subjective-continuity failed to be maintained. If functional-equivalence was achieved, however, it doesn’t necessitate that subjective-continuity was maintained.

Bio or Cyber? Does It Matter?

Biological approaches to indefinite-longevity, such as Aubrey de Grey’s SENS and Michael Rose’s Evolutionary Selection for Longevity, among others, have both comparative advantages and drawbacks. The chances of introducing subjective-discontinuity are virtually nonexistent compared to non-biological (which I will refer to as Techno-Immortalist) approaches. This makes them at once more appealing. However, it remains to be seen whether the advantages of the techno-immortalist approach outweigh its comparative dangers in regard to its potential to introduce subjective-discontinuity. If such dangers can be obviated, the techno-immortalist approach has certain potentials which Bio-Immortalist projects lack—or which are at least comparatively harder to facilitate using biological approaches.

Perhaps foremost among these potentials is the ability to actively modulate and modify the operations of individual neurons, which, if integrated across scales (that is, the concerted modulation/modification of whole emergent neural networks and regions via operational control over their constituent individual neurons), would allow us to take control over our own experiential and functional modalities (i.e., our mental modes of experience and general abilities/skills), thus increasing our degree of self-determination and the control we exert over the circumstances and determining conditions of our own being. Self-determination is the sole central and incessant essence of man; it is his means of self-overcoming—of self-dissent in a striving towards self-realization—and the ability to increase the extent of such self-control, self-mastery, and self-actualization is indeed a comparative advantage of techno-immortalist approaches.

To modulate and modify biological neurons, on the other hand, necessitates either high-precision genetic engineering, or likely the use of nanotech (i.e., NEMS), because whereas the proposed NRUs already have the ability to controllably vary their operations, biological neurons necessitate an external technological infrastructure for facilitating such active modulation and modification.

Biological approaches to increased longevity also appear to necessitate less technological infrastructure in terms of basic functionality. Techno-immortalist approaches require precise scanning technologies and techniques that neither damage nor distort (i.e., affect to the point of operational and/or functional divergence from their normal in situ state of affairs) the features and properties they are measuring. However, there is a useful distinction to be made between biological approaches to increased longevity, and biological approaches to indefinite longevity. Aubrey de Grey’s notion of Longevity Escape Velocity (LEV) serves to illustrate this distinction. With SENS and most biological approaches, he points out that although remediating certain biological causes of aging will extend our lives, by that time different causes of aging that were superseded (i.e., prevented from making a significant impact on aging) by the higher-impact causes of aging may begin to make a non-negligible impact. Aubrey’s proposed solution is LEV: if we can develop remedies for these newly significant causes within the amount of time gained by the remediation of the first set of causes, then we can stay on the leading edge and continue to prolong our lives. This is in contrast to other biological approaches, like Eric Drexler’s conception of nanotechnological cell-maintenance and cell-repair systems, which by virtue of being able to fix any source of molecular damage or disarray vicariously, not via eliminating the source but via iterative repair and/or replacement of the causes or “symptoms” of the source, will continue to work on any new molecular causes of damage without any new upgrades or innovations to their underlying technological and methodological infrastructures.

These would be more appropriately deemed an indefinite-biological-longevity technology, in contrast to biological-longevity technologies. Techno-immortalist approaches are by and large exclusively of the indefinite-longevity-extension variety, and so have an advantage over certain biological approaches to increased longevity, but such advantages do not apply to biological approaches to indefinite longevity.
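The logic of LEV described above can be sketched with a toy calculation. The numbers here are deliberately invented for illustration (they are not de Grey's projections); the point is only the structural condition: if each therapy adds more years than the next therapy takes to develop, remaining lifespan grows without bound, and otherwise it eventually hits zero:

```python
# Toy sketch of Longevity Escape Velocity: invented numbers, real structure.

def remaining_lifespan_after(rounds, start=30, gain=20, dev_time=15):
    """Remaining lifespan (years) after `rounds` cycles of therapy development.

    Each cycle, `dev_time` years pass while the next therapy is developed;
    if the person is still alive, the therapy then adds `gain` years.
    """
    remaining = start
    for _ in range(rounds):
        remaining -= dev_time   # time elapses during development
        if remaining <= 0:
            return 0            # died before the next therapy arrived
        remaining += gain       # therapy restores more time than elapsed
    return remaining

print(remaining_lifespan_after(10))           # gain > dev_time: lifespan keeps growing (80)
print(remaining_lifespan_after(10, gain=10))  # gain < dev_time: lifespan runs out (0)
```

In the first case each cycle nets +5 years of remaining lifespan; in the second, each cycle nets -5 years until death, which is the "falling behind the leading edge" failure mode LEV is meant to avoid.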

A final advantage of techno-immortalist approaches is the independence from external environments they provide us. They also make death by accident far less likely, both by enabling us to have more durable bodies and by providing independence from external environments, which means that certain extremes of temperature, pressure, impact-velocity, atmosphere, etc., will not immediately entail our death.

I do not want to discredit any approaches to immortality discussed in this essay, nor any I haven’t mentioned. Every striving and attempt at immortality is virtuous and righteous, and this sentiment will only become more and more apparent, culminating on the day when humanity looks back, and wonders how we could have spent so very much money and effort on the Space Race to the Moon with no perceivable scientific, resource, or monetary gain (though there were some nationalistic and militaristic considerations in terms of America not being superseded on either account by Russia), yet took so long to make a concerted global effort to first demand and then implement well-funded attempts to finally defeat death—that inchoate progenitor of 100,000 unprecedented cataclysms a day. It’s true—the world ends 100,000 times a day, to be lighted upon not once more for all of eternity. Every day. What have you done to stop it?

So What?

Indeed, so what? What does this all mean? After all, I never actually built any systems, or did any physical experimentation. I did, however, do a significant amount of conceptual development and thinking on both the practical consequences (i.e., required technologies and techniques, different implementations contingent upon different premises and possibilities, etc.) and the larger social and philosophical repercussions of immortality prior to finding out about other approaches. And I planned on doing physical experimentation and building physical systems; but I thought that working on it in my youth, until such a time as I would be in a position to test and implement these ideas more formally via academia or private industry, would be better for the long-term success of the endeavor.

As noted in Chapter 1, this reifies the naturality and intuitive simplicity of indefinite longevity’s ardent desirability and fervent feasibility, along a large variety of approaches ranging from biotechnology to nanotechnology to computational emulation. It also reifies the naturality and desirability of Transhumanism. I saw one of the virtues of this vision as its potential to make us freer, to increase our degree of self-determination, as giving us the ability to look and feel however we want, and the ability to be—and more importantly to become—anything we so desire. Man is marked most starkly by his urge and effort to make his own self—to formulate the best version of himself he can, and then to actualize it. We are always reaching toward our better selves—striving forward in a fit of unbound becoming toward our newest and thus truest selves; we always have been, and with any courage we always will.

Transhumanism is but the modern embodiment of our ancient striving towards increased self-determination and self-realization—of all we’ve ever been and done. It is the current best contemporary exemplification of what has always been the very best in us—the improvement of self and world. Indeed, the ‘trans’ and the ‘human’ in Transhumanism can only signify each other, for to be human is to strive to become more than human—or to become more so human, depending on which perspective you take.

So come along and long for more with me; the best is e’er yet to be!

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Koene, R. (2011). What is carboncopies.org? Retrieved February 28, 2013 from http://www.carboncopies.org/

Rose, M. (2004). Biological Immortality. In B. Klein, The Scientific Conquest of Death (pp. 17-28). Immortality Institute.

Sandberg, A., & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap, Technical Report #2008-3. Retrieved February 28, 2013 from http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013 from http://www.minduploading.org/

de Grey, A. D. N. J. (2004). Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now. PLoS Biol 2(6): e187. doi:10.1371/journal.pbio.0020187

How Can I Live Forever?: What Does and Does Not Preserve the Self – Video by G. Stolyarov II


When we seek indefinite life, what is it that we are fundamentally seeking to preserve? Mr. Stolyarov discusses what is necessary for the preservation of “I-ness” – an individual’s direct vantage point: the thoughts and sensations of a person as that person experiences them directly.

Once you are finished with this video, you can take a quiz and earn the “I-ness” Awareness Open Badge.

Reference

– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Essay by G. Stolyarov II

Mind as Interference with Itself: A New Approach to Immediate Subjective-Continuity – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 21, 2013
******************************
This essay is the sixth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first five chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, “Concepts for Functional Replication of Biological Neurons”, “Gradual Neuron Replacement for the Preservation of Subjective-Continuity”, and “Wireless Synapses, Artificial Plasticity, and Neuromodulation”.
***
Electromagnetic Theory of Mind
***

One line of thought I explored during this period of my conceptual work on life extension was concerned with whether it was not the material constituents of the brain manifesting consciousness, but rather the emergent electric or electromagnetic fields generated by the concerted operation of those material constituents, that instantiate mind. This work sprang from reading literature on Karl Pribram’s holonomic-brain theory, in which he developed a “holographic” theory of brain function. A hologram can be cut in half, and, if illuminated, each piece will still retain the whole image, albeit at a loss of resolution. This is due to informational redundancy in the recording procedure (i.e., because it records phase and amplitude, as opposed to just amplitude in normal photography). Pribram’s theory sought to explain the results of experiments in which a patient who had up to half his brain removed nonetheless retained levels of memory and intelligence comparable to what he possessed prior to the procedure, and to explain the similar results of experiments in which the brain is sectioned and the relative organization of these sections is rearranged without the drastic loss in memory or functionality one would anticipate. These experiments appear to show a holonomic principle at work in the brain. I immediately saw the relation to gradual uploading, particularly the brain’s ability to take over the function of parts recently damaged or destroyed beyond repair. I also saw the emergent electric fields produced by the brain as much better candidates for exhibiting the material properties needed for such holonomic attributes. For one, electromagnetic fields (if considered as waves rather than particles) are continuous, rather than modular and discrete as in the case of atoms.

The electric-field theory of mind also seemed to provide a hypothetical explanatory model for the existence of subjective-continuity through gradual replacement. (Remember that the existence and successful implementation of subjective-continuity is validated by our subjective sense of continuity through normative metabolic replacement of the molecular constituents of our biological neurons—a.k.a. molecular turnover). If the emergent electric or electromagnetic fields of the brain are indeed holonomic (i.e., possess the attribute of holographic redundancy), then we have a potential explanatory model to account for why the loss of a constituent module (i.e., neuron, neuron cluster, neural network, etc.) fails to cause subjective-discontinuity. Namely, subjective-continuity is retained because the loss of a constituent part doesn’t negate the emergent information (the big picture), but only eliminates a fraction of its original resolution. This looked like empirical support for the claim that it is the electric fields, rather than the material constituents of the brain, that facilitate subjective-continuity.

Another, more speculative aspect of this theory (i.e., one not supported by empirical research or literature) was the hypothesis that the increased interaction among electric fields in the brain (i.e., interference via wave superposition, the result of which is determined by both phase and amplitude) might itself provide a physical basis for the holographic/holonomic property of “informational redundancy”, should it turn out that the brain’s electric fields do not already possess or retain the holographic-redundancy attributes mentioned above.

A local electromagnetic field is produced by the electrochemical activity of each neuron. This field then undergoes interference with other local fields, and at each point up the scale we have more fields interfering and combining. The level of disorder makes the claim that salient computation is occurring here dubious, due to the lack of precision and the high level of variability, which provide an ample basis for dysfunction (including increased noise, the lack of a stable, i.e., static or material, means of information storage, and poor signal transduction or at least a high decay rate for signal propagation). However, the fact that the fields interfere at every scale means that a local electric field contains not only information encoding the operational states and functional behavior of the neuron from which it originated, but also information encoding the operational states of other neurons, acquired by interacting, interfering, and combining with the fields those neurons produce (combining in both amplitude and phase, as in holography). Thus, if one neuron dies, some of its properties may remain encoded in the fields of the other neurons with which its own field had interfered. This appeared to provide a possible physical basis for the brain’s hypothesized holonomic properties.
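
To make the superposition point concrete, here is a minimal numerical sketch (my own toy idealization, not a model from the literature): each neuron’s local field at a given point is treated as a complex phasor, so that combination involves both amplitude and phase, as in holography.

```python
import cmath

# Toy idealization (an assumption, not the essay's model): each neuron's
# local field at a point is a complex phasor with an amplitude and a phase.
def phasor(amplitude, phase):
    return amplitude * cmath.exp(1j * phase)

fields = {
    "neuron_a": phasor(1.0, 0.0),
    "neuron_b": phasor(0.5, cmath.pi / 3),
    "neuron_c": phasor(0.8, cmath.pi / 2),
}

# Superposition combines every source's amplitude and phase into one field.
total = sum(fields.values())

# If neuron_a "dies", the residual field still encodes the other sources;
# its lost contribution is exactly the difference between the combined
# field and the survivors' fields, illustrating (in a toy way) how
# information about one source is spread into the whole.
residual = total - fields["neuron_a"]
assert abs(residual - (fields["neuron_b"] + fields["neuron_c"])) < 1e-12
```

The point of the toy example is only that the combined field is a function of every contributing source, so information about any one source is distributed across the whole rather than stored in a single location.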

If electric fields are the physically continuous process that allows for continuity of consciousness (as in theories of emergence), then this suggests that computational substrates instantiating consciousness need to exhibit similar properties. This is not a form of vitalism, because I am not postulating that some extra-physical (i.e., metaphysical) process instantiates consciousness, but rather that a material aspect does, and that such an aspect may have to be incorporated into any attempt at gradual substrate replacement meant to retain subjective-continuity through the procedure. Nor is it a matter of simulating the emergent electric fields using normative computational hardware: the claim is not that the electric fields provide needed functionality, or implement some salient aspect of computation that would otherwise be left out, but rather that they form a physical basis for continuity and emergence unrelated to functionality yet imperative to experiential continuity or subjectivity. I distinguish this from the type of subjective-continuity discussed thus far (the feeling of being the same person through the process of gradual substrate replacement) via the term “immediate subjective-continuity”, as opposed to “temporal subjective-continuity”. Immediate subjective-continuity is the capacity to feel, period. Temporal subjective-continuity is the state of feeling like the same person you were. Thus, while temporal subjective-continuity inherently necessitates immediate subjective-continuity, immediate subjective-continuity does not require temporal subjective-continuity as a fundamental prerequisite.

Thus I explored variations of NRU operational-modality that incorporate this consideration (i.e., prosthetics on the cellular scale), particularly for the informational-functionalist (i.e., computational) NRUs, as the physical-functionalist NRUs were presumed to instantiate these same emergent fields via their normative operation. The approach consisted of either (a) translating the informational output of the models into the generation of physical fields (either at the end of the process, or throughout, by providing the internal area or volume of the unit with a grid composed of electrically conductive nodes, such that the voltage patterns can be physically instantiated in temporal synchrony with the computational model), or (b) constructing the computational substrate instantiating the computational model so as to generate emergent electric fields in a manner as consistent with biological operation as possible (e.g., in the brain a given neuron is never in an electrically neutral state, never completely off, but rather always in a range of values between on and off [see Chapter 2], which means that there is never a break, i.e., a spatiotemporal region of discontinuity, in its emergent electric fields; these operational properties would have to be replicated by any computational substrate used to replicate biological neurons via the informationalist-functionalist approach, if the premise that they facilitate immediate subjective-continuity is correct).

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Wireless Synapses, Artificial Plasticity, and Neuromodulation – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 21, 2013
******************************
This essay is the fifth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first four chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, “Concepts for Functional Replication of Biological Neurons”, and “Gradual Neuron Replacement for the Preservation of Subjective-Continuity”.
***

Morphological Changes for Neural Plasticity

The finished physical-functionalist units would need the ability to change their emergent morphology, not only for active modification of single-neuron functionality but even for basic functional replication of normative neuron behavior, in order to take into account neural plasticity and the way that morphological changes facilitate learning and memory. My original approach involved the use of retractable, telescopic dendrites and axons (with corresponding internal retractable and telescopic dendritic spines and axonal spines, respectively) activated electromechanically by the unit-CPU. For morphological changes, by providing the edges of each membrane section with an electromechanically hinged connection (i.e., a means of changing the angle of inclination between immediately adjacent sections), the emergent morphology can be controllably varied. This eventually developed into an internal compartment designed to detach a given membrane section, move it down into the internal compartment of the neuronal soma or terminal, transport it along a track that stores alternative membrane sections stacked face-to-face (to compensate for limited space), and subsequently replace it with a membrane section containing an alternate functional component (e.g., an ion pump, or a voltage-gated or ligand-gated ion channel) embedded therein. Note that this approach was also conceived of as an alternative to retractable axons/dendrites and axonal/dendritic spines: by attaching additional membrane sections at a very steep angle of inclination (or at a lesser inclination with a greater quantity of segments), one creates an emergent section of artificial membrane that extends out from the biological membrane in the same way as axons and dendrites.

However, this approach was eventually supplemented by one that necessitates less technological infrastructure (i.e., one that is simpler and thus more economical and realizable). If the size of the integral-membrane components is small enough (preferably smaller than that of their biological analogues), then differential activation of components or membrane sections would achieve the same effect as changing the organization or type of integral-membrane components, effectively eliminating the need to actually interchange membrane sections at all.
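
A minimal sketch of this differential-activation idea (the class and component names are hypothetical illustrations, not part of the original design): a section embeds several component types at once, and an effective “morphological” change is just a change in which subset is enabled.

```python
# Hypothetical sketch: if integral-membrane components are small enough,
# a membrane section can embed several component types simultaneously and
# emulate a morphological change by switching which subset is active,
# instead of physically exchanging membrane sections.
class MembraneSection:
    def __init__(self, embedded_components):
        self.embedded = set(embedded_components)   # physically present
        self.active = set()                        # currently enabled

    def configure(self, desired):
        desired = set(desired)
        if not desired <= self.embedded:
            raise ValueError("component type not embedded in this section")
        self.active = desired                      # no physical swap needed

section = MembraneSection({"Na_channel", "K_channel", "ion_pump"})
section.configure({"Na_channel", "ion_pump"})
assert section.active == {"Na_channel", "ion_pump"}
section.configure({"K_channel"})                   # effective "morphology" change
assert section.active == {"K_channel"}
```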

Active Neuronal Modulation and Modification

The technological and methodological infrastructure used to facilitate neural plasticity can also be used for active modification and modulation of neural behavior (and of the emergent functionality determined by local neuronal behavior) towards the aim of mental augmentation and modification. Potential uses already discussed include mental amplification (increasing or augmenting existing functional modalities, e.g., intelligence, emotion, morality) and mental augmentation (the creation of categorically new functional and experiential modalities). While the distinction between modification and modulation isn’t definitive, a useful way of differentiating them is to consider modification as morphological change creating new functional modalities, and modulation as actively varying the operation of existing structures/processes, not through morphological change but rather through changes to the operation of integral-membrane components or to the properties of the local environment (e.g., increasing local ionic concentrations).

Modulation: A Less Discontinuous Alternative to Morphological Modification

The use of modulation to achieve the effective results of morphological changes seemed like a hypothetically less discontinuous alternative to morphological change itself (and thus as having a hypothetically greater probability of achieving subjective-continuity). I am now more dubious about the validity of this approach, however, because the emergent functionality (normatively determined by morphological features) is still changed in an effectively equivalent manner.

The Eventual Replacement of Neural Ionic Solutions with Direct Electric Fields

Upon full gradual replacement of the CNS with physical-functionalist equivalents, the preferred embodiment consisted of replacing the ionic solutions with electric fields that preserve the electric potential instantiated by the difference in ionic concentrations on the respective sides of the membrane. Such electric fields can be generated directly, without recourse to electrochemicals for manifesting them. In such a case the integral-membrane components would be replaced by a means of generating and maintaining a static and/or dynamic electric field on either side of the membrane, or even merely of generating an electrical potential (i.e., voltage—a broader category encompassing electric fields) with solid-state electronics.

This procedure would also allow for a fraction of the speedup (that is, an increased rate of subjective perception of time, which extends to speed of thought) offered by emulatory (i.e., strictly computational) replication-methods, because operation would no longer be limited by the rate of passive ionic diffusion but instead by the propagation velocity of electric or electromagnetic fields.
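
An illustrative order-of-magnitude check of that ceiling, using assumed round numbers (fast myelinated axons conduct on the order of 100 m/s, while signals in a conductive substrate can propagate at an appreciable fraction of the speed of light, here taken as 2×10⁸ m/s):

```python
# Illustrative comparison of the speed limits involved; both figures are
# assumed round numbers, not measurements from the essay.
v_biological = 100.0   # m/s, approximate upper bound for myelinated axons
v_field = 2.0e8        # m/s, assumed signal velocity in a conductive substrate

speedup_ceiling = v_field / v_biological
assert speedup_ceiling == 2.0e6   # roughly a million-fold ceiling on this factor
```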

Wireless Synapses

If we replace the physical synaptic connections the NRU uses to communicate (with both existing biological neurons and with other NRUs) with a wireless means of synaptic transmission, we can preserve the same functionality (insofar as it is determined by synaptic connectivity) while allowing any NRU to communicate with any other NRU or biological neuron in the brain at potentially equal speed. First we need a way of converting the output of an NRU or biological neuron into information that can be transmitted wirelessly. For NRUs with computational operational-modalities, regardless of sub-class, this requires no new technological infrastructure, because they already deal with 2nd-order (i.e., not structurally or directly embodied) information: the informational-functionalist NRUs deal solely in this type of information, and the cyber-physical-systems sub-class of the physicalist-functionalist NRUs deals with this kind of information in the intermediary stage between sensors and actuators; consequently, converting what would have been a sequence of electromechanical actuations into information isn’t a problem. Only the passive-physicalist-functionalist NRU class requires additional technological infrastructure to accomplish this, because it doesn’t already use computational operational-modalities for its normative operation, whereas the other NRU classes do.

We distribute receivers within range of every neuron (or, alternatively, every NRU) in the brain, connected to actuators whose precise composition depends on the operational modality of the receiving biological neuron or NRU. The receiver translates incoming information into physical actuations (e.g., the release of chemical stores), thereby instantiating that informational output in physical terms. For biological neurons, the receiver’s actuators would consist of a means of electrically stimulating the neuron and of releasable chemical stores of neurotransmitters (or of ionic solutions, as an alternate means of electrical stimulation via the manipulation of local ionic concentrations). For informational-functionalist NRUs, the information is already in a form they can accept; each can simply integrate that information into its extant model. For cyber-physicalist NRUs, the unit’s CPU merely needs to be able to translate that information into the sequence in which it must electromechanically actuate its artificial ion-channels. For the passive-physicalist NRUs (i.e., those having no computational hardware devoted to operating individual components at all, operating according to physical feedback between components alone), our only option appears to be translating received information into manipulation of the local environment so as to vicariously affect the operation of the NRU (e.g., increasing electric potential through manipulation of local ionic concentrations, or increasing the rate of diffusion via applied electric fields that attract ions, thus achieving the same effect as a steeper electrochemical gradient or potential-difference).
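
The class-dependent translation described above can be sketched as a simple dispatch. The class names follow this essay, while the message fields and actuation labels are illustrative assumptions of my own:

```python
# Hypothetical dispatch sketch: a wireless synaptic message is translated
# into whatever actuation modality the receiving unit understands.
def deliver(message, target_class):
    if target_class == "biological":
        # electrical stimulation or release of neurotransmitter stores
        return ("release_chemical_stores", message["neurotransmitter"],
                message["magnitude"])
    if target_class == "informational":
        # already informational: integrate directly into the running model
        return ("integrate_into_model", message)
    if target_class == "cyber-physical":
        # the unit's CPU translates into an electromechanical actuation sequence
        return ("actuate_ion_channels", message["magnitude"])
    if target_class == "passive-physical":
        # no CPU: act vicariously on the local environment instead
        return ("raise_local_ionic_concentration", message["magnitude"])
    raise ValueError(f"unknown NRU class: {target_class}")

msg = {"neurotransmitter": "glutamate", "magnitude": 0.7}
assert deliver(msg, "informational")[0] == "integrate_into_model"
assert deliver(msg, "passive-physical")[0] == "raise_local_ionic_concentration"
```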

The technological and methodological infrastructure for this is very similar to that used for the “integrational NRUs”, which allows a given NRU-class to communicate with either existing biological neurons or NRUs of an alternate class.

Integrating New Neural Nets Without Functional Distortion of Existing Regions

The use of artificial neural networks (which here will designate NRU-networks that do not replicate any existing biological neurons, rather than the normative artificial neural networks mentioned in the first and second parts of this essay), rather than normative neural prosthetics and BCI, was the preferred method of cognitive augmentation (the creation of categorically new functional/experiential modalities) and cognitive amplification (the extension of existing functional/experiential modalities). Because they function according to the same operational modality as existing neurons (whether biological or artificial replacements), they can become a continuous part of our “selves”, whereas normative neural prosthetics and BCI are comparatively less likely to be capable of becoming an integral part of our experiential continuum (or subjective sense of self), owing to their significant operational dissimilarity to biological neural networks.

A given artificial neural network can be integrated with existing biological networks in a few ways. One is interior integration, wherein the new neural network is “inter-threaded” with one or multiple existing networks, a given artificial neuron being placed among their existing neurons; the networks are integrated and connected on a very local level. In “anterior” integration, by contrast, the new network would be integrated in a way comparable to the connection between separate cortical columns, with the majority of integration happening at the periphery of each respective network or cluster.

If the interior-integration approach is used, then the functionality of the region may be distorted or negated, because neurons that once took a certain amount of time to communicate now take comparatively longer, the distance between them having been increased to make room for the newly integrated artificial neurons. To negate these problematic effects, a means must be employed of increasing the speed of communication, which is determined by both (a) the rate of diffusion across the synaptic junction and (b) the rate of diffusion along the neuronal membrane, which in most cases is synonymous with the propagation velocity in the membrane; the exception is myelinated axons, wherein a given action potential “jumps” from node of Ranvier to node of Ranvier, and propagation velocity is instead determined by the thickness and length of the myelinated sections.

My original solution was the use of an artificial membrane morphologically modeled on a myelinated axon, possessing very low membrane capacitance (and thus high propagation velocity, just as myelination speeds conduction by reducing the capacitance of the biological membrane), combined with correspondingly reducing the capacitance of the existing axon or dendrite of the biological neuron. The cumulative propagation velocity of both is increased in proportion to how far apart they are moved. In this way, the propagation velocities of the existing neuron and of the connector-terminal are increased enough to allow the existing biological neurons to communicate as fast as they would have prior to the addition of the artificial neural network. This solution was eventually supplemented by the wireless means of synaptic transmission described above, which allows any neuron to communicate with any other neuron at equal speed.

Gradually Assigning Operational Control of a Physical NRU to a Virtual NRU

This approach allows us to apply the single-neuron gradual replacement facilitated by the physical-functionalist NRU to the informational-functionalist (physically embodied) NRU. A given section of artificial membrane and its integral membrane components are modeled. When this model is functioning in parallel (i.e., synchronization of operative states) with its corresponding membrane section, the normative operational routines of that artificial membrane section (usually controlled by the unit’s CPU and its programming) are subsequently taken over by the computational model—i.e., the physical operation of the artificial membrane section is implemented according to and in correspondence with the operative states of the model. This is done iteratively, with the informationalist-functionalist NRU progressively controlling more and more sections of the membrane until the physical operation of the whole physical-functionalist NRU is controlled by the informational operative states of the informationalist-functionalist NRU. While this concept sprang originally from the approach of using multiple gradual-replacement phases (with a class of model assigned to each phase, wherein each is more dissimilar to the original than the preceding phase, thereby increasing the cumulative degree of graduality), I now see it as a way of facilitating sub-neuron gradual replacement in computational NRUs. Also note that this approach can be used to go from existing biological membrane-sections to a computational NRU, without a physical-functionalist intermediary stage. This, however, is comparatively more complex because the physical-functionalist NRU already has a means of modulating its operative states, whereas the biological neuron does not. 
In such a case the section of lipid-bilayer membrane would presumably have to be operationally isolated from adjacent sections of membrane, using a system of chemical inventories (of either highly concentrated ionic solution or neurotransmitters, depending on the area of membrane) to produce electrochemical output, and chemical sensors to accept the electrochemical input from adjacent sections (i.e., a means of detecting depolarization and hyperpolarization). Thus, to facilitate an action potential, for example, the chemical sensors would detect depolarization, the computational NRU would model the influx of ions through the section of membrane it is replacing, and the unit would then translate the effective result to the opposite edge of the section, via either the release of neurotransmitters or the manipulation of local ionic concentrations, so as to generate the required depolarization in the adjacent section of biological membrane.
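
The iterative hand-over of operational control described above can be outlined as a toy loop, under the assumption (mine, for illustration) that per-section synchrony between the computational model and the physical membrane section can be verified before control passes:

```python
# Toy outline of the iterative hand-over: each membrane section starts
# under the physical unit's CPU; once its computational model is verified
# to run in synchrony with it, control of that section passes to the
# model, one section at a time.
def gradual_handover(sections, in_synchrony):
    controller = {s: "physical_CPU" for s in sections}
    for s in sections:                        # iterate section by section
        if in_synchrony(s):                   # model mirrors the section's states
            controller[s] = "computational_model"
    return controller

sections = ["m1", "m2", "m3", "m4"]
result = gradual_handover(sections, in_synchrony=lambda s: True)
# once every section is verified, the whole unit is model-controlled
assert all(c == "computational_model" for c in result.values())
```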

Integrational NRU

This consisted of a unit facilitating connection between emulatory (i.e., informational-functionalist) units and existing biological neurons. The output of the emulatory units is converted into electrical and chemical output at the locations where the emulatory NRU makes synaptic connection with other biological neurons: electrical stimulation is facilitated through the release of chemical inventories that increase local ionic concentrations, and chemical stimulation through the release of neurotransmitters. The input of existing biological neurons making synaptic connections with the emulatory NRU is likewise read by chemical and electrical sensors and converted into informational input corresponding to the operational modality of the informationalist-functionalist NRU classes.

Solutions to Scale

We might need NEMS, or something below the scale of the present state of MEMS, for the technological infrastructure of either (a) the electromechanical systems replicating a given section of neuronal membrane, or (b) the systems used to construct and/or integrate the sections, or those used to remove or otherwise operationally isolate the existing section of lipid-bilayer membrane being replaced from adjacent sections. In that case, a postulated solution consisted of taking the difference in length between the artificial membrane section and the existing lipid-bilayer section (a difference determined by how small we can construct functionally operative artificial ion-channels) and incorporating it as added curvature in the artificial membrane-section, such that its edges converge upon or superpose with the edges of the space left by the removal of the lipid-bilayer membrane-section. We would also need to increase the propagation velocity (typically determined by the rate of ionic influx, which in turn is typically determined by the concentration gradient, i.e., the difference in ionic concentrations on the respective sides of the membrane) such that the action potential reaches the opposite end of the replacement section at the same time that it would have via the lipid-bilayer membrane. This could be accomplished directly by the application of electric fields with a charge opposite that of the ions (which would attract them, thus increasing the rate of diffusion), by increasing the number of open channels or the diameter of existing channels, or simply by increasing the concentration gradient through local manipulation of extracellular and/or intracellular ionic concentration, e.g., through concentrated electrolyte stores of the relevant ion that can be released to increase the local ionic concentration.
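
The timing constraint above reduces to a simple proportionality: if the replacement section is longer than the biological section it replaces, the propagation speed inside it must rise by the same ratio so the action potential exits at the same moment it otherwise would. A worked check with illustrative numbers:

```python
# Worked check of the timing constraint; all numbers are illustrative.
def required_speed(len_artificial, len_biological, v_biological):
    # speed must scale with the length ratio to keep transit time constant
    return (len_artificial / len_biological) * v_biological

v_bio = 1.0                     # propagation speed in the biological section (arbitrary units)
len_bio, len_art = 1.0, 1.25    # artificial section assumed 25% longer (extra curvature)

v_art = required_speed(len_art, len_bio, v_bio)
# transit times through old and new sections match exactly:
assert abs(len_art / v_art - len_bio / v_bio) < 1e-12
```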

If the degree of miniaturization attainable is so low as to make even this approach untenable (e.g., if increasing curvature still doesn’t allow successful integration), then a hypothesized alternative was to increase the overall space between adjacent neurons, integrate the NRU, replace the normative connection with chemical inventories (of either ionic compound or neurotransmitter) released at the site of the existing connection, and have the NRU (or NRU sub-section, i.e., artificial membrane section) wirelessly control the release of those chemical inventories according to its operative states.

The next chapter describes (a) possible physical bases for subjective-continuity through a gradual-uploading procedure and (b) possible design requirements for in vivo brain-scanning and for systems to construct and integrate the prosthetic neurons with the existing biological brain.

Gradual Neuron Replacement for the Preservation of Subjective-Continuity – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 19, 2013
******************************
This essay is the fourth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first three chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, and “Concepts for Functional Replication of Biological Neurons”.
***

Gradual Uploading Applied to Single Neurons (2008)

In early 2008 I was trying to conceptualize a means of applying the logic of gradual replacement to single neurons, under the premise that extending the scale of gradual replacement to individual sections of the neuronal membrane and its integral membrane proteins (thus increasing the degree of graduality between replacement sections) would increase the likelihood of subjective-continuity through substrate transfer. I also started moving away from the use of normative nanotechnology as the technological and methodological infrastructure for the NRUs, as it would delay the date at which these systems could be developed and experimentally verified. Instead I started focusing on conceptualizing systems that electromechanically replicate the functional modalities of the small-scale integral-membrane components of the neuron. I was calling this approach the “active mechanical membrane” to differentiate it from the electro-chemical-mechanical modalities of the nanotech approach. I also started using MEMS rather than NEMS for the underlying technological infrastructure (because MEMS are the more mature and thus less restrictive technology), while still identifying NEMS as the preferred eventual infrastructure.

I felt that replicating the metabolic replacement rate of biological neurons was the ideal to strive for, since we know that subjective-continuity is preserved through the gradual metabolic replacement (a.k.a. molecular turnover) that occurs in the existing biological brain. My approach was to measure the normal rate of metabolic replacement in existing biological neurons and the scale at which such replacement occurs (i.e., are the sections being replaced metabolically single molecules, molecular complexes, or whole molecular clusters?). Then, when replacing sections of the membrane with electromechanical functional equivalents, the same ratio of replacement-section size to replacement time would be applied; that is, the time between sectional replacements would be increased in proportion to how much larger the artificial replacement sections are than the existing metabolic replacement sections. Replacement size (or scale) is defined as the size of the section being replaced, which would be molecular complexes in the case of normative metabolic replacement. Replacement time is defined as the interval of time between a given section being replaced and a section with which it has causal connection being replaced; in metabolic replacement it is the time interval between a given molecular complex being replaced and an adjacent (or directly causally connected) molecular complex being replaced.

I therefore posited the following formula:

 Ta = (Sa/Sb)*Tb,

where Sa is the size of the artificial-membrane-replacement sections, Sb is the size of the metabolic replacement sections, Tb is the time interval between the metabolic replacement of two successive metabolic replacement sections, and Ta is the time interval needing to be applied to the comparatively larger artificial-membrane-replacement sections so as to preserve the same replacement-rate factor (and correspondingly the same degree of graduality) that exists in normative metabolic replacement through the process of gradual replacement on the comparatively larger scale of the artificial-membrane sections.
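
A worked example of the formula, with purely illustrative numbers:

```python
# Worked example of Ta = (Sa/Sb) * Tb; all magnitudes are illustrative.
def replacement_interval(Sa, Sb, Tb):
    # scale the replacement interval by the ratio of section sizes,
    # preserving the size-to-time ratio of normative metabolic replacement
    return (Sa / Sb) * Tb

Sb = 1.0      # size of a metabolic replacement section (arbitrary units)
Sa = 1000.0   # artificial-membrane replacement section, assumed 1000x larger
Tb = 2.0      # assumed interval between adjacent metabolic replacements (seconds)

Ta = replacement_interval(Sa, Sb, Tb)
assert Ta == 2000.0   # same size-to-time ratio, hence same degree of graduality
```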

The use of the time-to-scale factor corresponding to normative molecular turnover, or “metabolic replacement”, follows from the fact that we know subjective-continuity through substrate replacement is successful at this time-to-scale ratio. However, the lack of a non-arbitrarily quantifiable measure of time, and the fact that time is infinitely divisible (i.e., it can be broken down into smaller intervals to an arbitrarily large degree), logically necessitate that the salient variable is not time per se, but rather causal interaction between co-affective or “causally coupled” components. Interaction between components, and the state-transitions each component or procedural step undergoes, are the only viable quantifiable measures of time. Thus, while time is the operative variable in the above equation, a better (i.e., more methodologically rigorous) variable would be a measure of either (a) the number of causal interactions occurring between co-affective or “adjacent” components within the replacement interval Ta, which is synonymous with the frequency of causal interaction, or (b) the number of state-transitions a given component undergoes within the interval Ta. While the two should be generally correlative, in that state-transitions are facilitated via causal interaction among components, state-transitions may be the better metric because they allow us to quantitatively compare categorically dissimilar types of causal interaction that otherwise couldn’t be summed into a single variable or measure. For example, if one type of molecular interaction has a greater effect on the state-transitions of the components involved (i.e., facilitates a comparatively greater state-transition) than does another type, then counting causal interactions may be less accurate than quantifying the magnitude of the state-transitions they produce.

In this way the rate of gradual replacement, despite being on a scale larger than normative metabolic replacement, would hypothetically follow the same degree of graduality with which biological metabolic replacement occurs. This was meant to increase the likelihood of subjective-continuity through a substrate-replacement procedure (both because it is necessarily more gradual than gradual replacement of whole individual neurons at a time, and because it preserves the degree of graduality that exists through the normative metabolic replacement that we already undergo).

Replicating Neuronal Membrane and Integral Membrane Components

Thus far, two main classes of neuron-replication approach have been identified: informational-functionalist and physical-functionalist, the former corresponding to computational and simulation/emulation approaches and the latter to physically embodied, “prosthetic” approaches.

The physicalist-functionalist approach, however, can at this point be further sub-divided into two sub-classes. The first can be called “cyber-physicalist-functionalist”, which involves controlling the artificial ion-channels and receptor-channels via normative computation (i.e., an internal CPU or controller-circuit) operatively connected to sensors and to the electromechanical actuators and components of the ion and receptor channels (i.e., sensing the presence of an electrochemical gradient or difference in electrochemical potential [equivalent to relative ionic concentration] between the respective sides of a neuronal membrane, and activating the actuators of the artificial channels to either open or remain closed, based upon programmed rules). This sub-class is an example of a cyber-physical system—a designation for any system with a high level of connection or interaction between its physical and computational components. Cyber-physical systems in turn grew out of embedded systems, which designate any system using embedded computational technology and include many electronic devices and appliances.

This is one further functional step removed from the second approach, which I was then simply calling the “direct” method, but which would be more accurately called the passive-physicalist-functionalist approach. Electronic systems are differentiated from electric systems by being active (i.e., performing computation or, more generally, signal-processing), whereas electric systems are passive and aren’t meant to transform (i.e., process) incoming signals (though any computational system’s individual components must at some level be composed of electric, passive components). Whereas the cyber-physicalist-functionalist sub-class has computational technology controlling its processes, the passive-physicalist-functionalist approach has components emergently constituting a computational device. This consisted of providing the artificial ion-channels with a means of opening in the presence of a given electric potential difference (i.e., voltage) and the receptor-channels with a means of opening in response to the unique attributes of the neurotransmitters they correspond to (such as chemical bonding, as in ligand-based receptors, or alternatively in response to their electrical properties in the same manner – i.e., according to the same operational-modality – as the artificial ion channels), without a CPU correlating the presence of an attribute measured by sensors with the corresponding electromechanical behavior of the membrane needing to be replicated in response thereto. Such passive systems differ from computation in that they only require feedback between components, wherein a system of mechanical, electrical, or electromechanical components is operatively connected so as to produce specific system-states or processes in response to the presence of specific sensed system-states of its environment or itself.
An example of this, in regard to the present case, would be constructing an ionic channel from piezoelectric materials, such that the presence of a certain electrochemical potential induces internal mechanical strain in the material; the spacing, dimensions, and quantity of segments would be designed so that the channel opens or closes as a single unit when internal mechanical strain is elicited by one electrochemical potential, while remaining unresponsive (or insufficiently responsive—i.e., not opening all the way) to another electrochemical potential. Biological neurons work in a similarly passive way, in which systems are organized to exhibit specific responses to specific stimuli in basic stimulus-response causal sequences by virtue of their own properties, rather than through external control of individual components via CPU.
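The passive idea can be made concrete with a toy sketch: the channel’s response is a fixed property of its design (here a hypothetical response band of potentials, with invented values), and no CPU makes any decision:

```python
# Passive-physicalist sketch: a channel that opens purely as a
# stimulus-response property of its material design. The response band is
# a hypothetical design parameter, not a measured biological value.

RESPONSE_BAND_MV = (-55.0, -40.0)  # potentials inducing enough strain to open

def passive_channel_open(potential_mv):
    """True iff the sensed potential falls within the designed response band;
    there is no programmable logic to reconfigure."""
    low, high = RESPONSE_BAND_MV
    return low <= potential_mv <= high

assert passive_channel_open(-50.0)      # within the designed band: opens
assert not passive_channel_open(-70.0)  # resting potential: stays closed
```

Changing this channel’s behavior means physically redesigning `RESPONSE_BAND_MV`, which is precisely the reprogrammability limitation discussed below.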

However, I found the cyber-physicalist approach preferable, provided it proved sufficient, due to the ability to reprogram computational systems—something that isn’t possible in passive systems without a reorganization of the components, which itself necessitates an increase in the required technological infrastructure, thereby increasing cost and design-requirements. This limit on reprogramming also imposes a limit on our ability to modify and modulate the operation of the NRUs (which will be necessary to retain the function of neural plasticity—presumably a prerequisite for experiential subjectivity and memory). The cyber-physicalist approach also seemed preferable due to a larger degree of variability in its operation: it would be easier to operatively connect electromechanical membrane components (e.g., ionic channels, ion pumps) to a CPU—and through the CPU to sensors—and to program it to elicit a specific sequence of ionic-channel opening and closing in response to specific sensor-states, than it would be to design artificial ionic channels that respond directly to the presence of an electric potential with sufficient precision and accuracy.

In the cyber-physicalist-functionalist approach the membrane material is constructed so as to be (a) electrically insulative, while (b) remaining thin enough to act as a capacitor via the electric potential differential (which is synonymous with voltage) between the two sides of the membrane.

The ion-channel replacement units consisted of electromechanical pores that open for a fixed amount of time in the presence of an ion gradient (a difference in electric potential between the two sides of the membrane); this was to be accomplished electromechanically via a means of sensing membrane depolarization (such as through the use of reference electrodes) connected to a microcircuit (or nanocircuit, hereafter referred to as a CPU) programmed to open the electromechanical ion-channels for a length of time corresponding to the rate of normative biological repolarization (i.e., the time it takes to restore the membrane polarization to the resting membrane potential following an action potential), thus allowing the influx of ions at a rate equal to that of the biological ion-channels. Likewise, sections of the pre-synaptic membrane were to be replaced by a section of inorganic membrane containing units that sense the presence of the neurotransmitter corresponding to the receptor being replaced; these were to be connected to a microcircuit programmed to elicit specific changes (i.e., an increase or decrease in ionic permeability, such as through increasing or decreasing the diameter of ion-channels—e.g., through an increase or decrease in electric stimulation of piezoelectric crystals, as described above—or an increase or decrease in the number of open channels) corresponding to the change in postsynaptic potential in the biological membrane resulting from postsynaptic receptor-binding. This requires a bit more technological infrastructure than I had anticipated the ion-channels would require.
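The programmed rule just described—open the electromechanical channels for a fixed interval matching normative repolarization whenever depolarization is sensed—can be sketched as follows; the threshold and timing values are invented for illustration:

```python
# Cyber-physicalist control rule: a sensor reading of membrane potential
# drives a programmed decision to hold the artificial ion-channels open for a
# fixed interval matching normative biological repolarization. Both constants
# are hypothetical placeholders, not measured values.

THRESHOLD_MV = -55.0     # depolarization threshold sensed by the electrodes
REPOLARIZATION_MS = 3.0  # open-time matching the biological repolarization rate

def channel_open_time(sensed_potential_mv):
    """Programmed rule: open for the fixed repolarization interval when
    depolarization is sensed; otherwise remain closed."""
    if sensed_potential_mv >= THRESHOLD_MV:
        return REPOLARIZATION_MS
    return 0.0

assert channel_open_time(-30.0) == 3.0  # depolarized: open for fixed interval
assert channel_open_time(-70.0) == 0.0  # at rest: remain closed
```

Because the rule lives in software rather than in material properties, both constants can later be reprogrammed—the key advantage claimed for this sub-class over the passive one.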

While the accurate and active detection of particular types and relative quantities of neurotransmitters is normally ligand-gated, we have a variety of potential, mutually exclusive approaches. For ligand-based receptors, sensing the presence and steepness of electrochemical gradients may not suffice. However, we don’t necessarily have to use ligand-receptor fitting to replicate the functionality of ligand-based receptors. If there is a difference in the charge (i.e., valence) between the neurotransmitter needing to be detected and other neurotransmitters, and the degree of that difference is detectable given the precision of our sensing technologies, then a means of sensing a specific charge may prove sufficient. I also developed an alternate method for replicating ligand-based receptors in the event that sensing electric charge proved insufficient. Different chemicals (e.g., neurotransmitters, but also potentially electrolyte solutions) have different weight-to-volume ratios. We equip each artificial-membrane section with an empty compartment capable of measuring the weight of its contents. Since the volume of the compartment is already known, this would allow us to identify specific neurotransmitters (or other relevant molecules and compounds) based on their unique weight-to-volume ratios. By operatively connecting the unit’s CPU to this sensor, we can program specific operations (i.e., the receptor opens, allowing entry for a fixed amount of time, or remains closed) in response to the detection of specific neurotransmitters. Though it is unlikely to be necessary, this method could also work for the detection of specific ions, and thus could serve as the operating mechanism underlying the artificial ion-channels as well—though this would probably require higher-precision weight-to-volume comparison than is required for neurotransmitters.
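A toy sketch of this identification scheme (the density values and tolerance below are invented placeholders, not measured figures):

```python
# Weight-to-volume identification: a compartment of known volume weighs its
# contents, and the CPU matches the resulting density against a table of
# known neurotransmitter densities. All numeric values are invented for
# illustration; a real table would come from measurement.

COMPARTMENT_VOLUME = 1.0  # known volume of the sensing compartment (arb. units)

KNOWN_DENSITIES = {       # hypothetical weight-to-volume ratios
    "glutamate": 1.46,
    "GABA": 1.11,
    "dopamine": 1.26,
}

def identify(measured_weight, tolerance=0.02):
    """Return the neurotransmitter whose known density matches the measured
    weight-to-volume ratio, or None (the receptor remains closed)."""
    density = measured_weight / COMPARTMENT_VOLUME
    for name, known in KNOWN_DENSITIES.items():
        if abs(density - known) <= tolerance:
            return name
    return None

assert identify(1.46) == "glutamate"  # match: CPU opens the receptor
assert identify(0.50) is None         # unknown contents: stay closed
```

The `tolerance` parameter stands in for the precision of the weighing sensor; detecting individual ions this way would demand a much tighter tolerance than neurotransmitters require.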

Sectional Integration with Biological Neurons

Integrating replacement-membrane sections with adjacent sections of the existing lipid bilayer membrane becomes far less problematic if the scale at which the membrane sections are handled (determined by the size of the replacement membrane sections) is homogeneous, as in the case of biological tissues, rather than molecularly heterogeneous—that is, if we are affixing the edges to a biological tissue rather than to complexes of individual lipid molecules. Reasons for hypothesizing a higher probability of homogeneity at the replacement scale include (a) the ability of experimenters and medical researchers to puncture the neuronal membrane with a micropipette (so as to measure membrane voltage) without rupturing the membrane beyond functionality, and (b) the fact that sodium and potassium ions do not leak through the gaps between the individual bilipid molecules, which would be present if the membrane were heterogeneous at this scale. If we find homogeneity at the scale of sectional replacement, we can use more normative means of affixing the edges of the replacement membrane section to the existing lipid bilayer membrane, such as micromechanical fasteners, adhesive, or fusing via heating or energizing. However, I also developed an approach applicable if the scale of sectional replacement was found to be molecular and thus heterogeneous. We find an intermediate chemical that stably bonds both to the bilipid molecules constituting the membrane and to the molecules or compounds constituting the artificial membrane section. Note that if the molecules or compounds constituting either must be energized so as to put them in an abnormal (i.e., unstable) energy state to make them susceptible to bonding, this is fine so long as the energies don’t reach levels damaging to the biological cell (or so long as such energies can be absorbed prior to impinging upon or otherwise damaging the biological cell).
If such an intermediate molecule or compound cannot be found, a second intermediate chemical that stably bonds with two alternate and secondary intermediate molecules (which themselves bond to either the biological membrane or the non-biological membrane section, respectively) can be used. The chances of finding a sequence of chemicals that stably bond (i.e., a given chemical forms stable bonds with the preceding and succeeding chemicals in the sequence) increase in proportion to the number of intermediate chemicals used. Note that it might be possible to apply constant external energization to certain molecules so as to force them to bond in the case that a stable bond cannot be formed, but this would probably be economically prohibitive and potentially dangerous, depending on the levels of energy and energization-precision.

I also worked on the means of constructing and integrating these components in vivo, using MEMS or NEMS. Most of the developments in this regard are described in the next chapter; however, some specific variations on the construction procedure were necessitated by the sectional-integration procedure, which I will comment on here. The integration unit would position itself above the membrane section, using the data acquired by the neuron data-measurement units, which specify the constituents of a given membrane section and assign it a number corresponding to a type of artificial-membrane section in the integration unit’s section-inventory (essentially a store of stacked artificial-membrane sections). A means of disconnecting a section of lipid bilayer membrane from the biological neuron is then depressed. This could be a hollow rectangular compartment with edges that sever the lipid bilayer membrane via force (e.g., edges terminating in blades), energy (e.g., edges terminating in heat elements), or chemical corrosion (e.g., edges coated with, or secreting, a corrosive substance). The detached section of lipid bilayer membrane is then lifted out and compacted, to be drawn into a separate compartment for storing waste organic materials. The artificial-membrane section is subsequently transported down through the same compartment. Since it is perpendicular to the face of the container, moving the section down through the compartment should force the intracellular fluid (which would presumably have leaked into the constructional container’s internal area when the lipid bilayer membrane-section was removed) back into the cell. Once the artificial-membrane section is in place, the preferred integration method is applied.

Sub-neuronal (i.e., sectional) replacement also necessitates that any dynamic patterns of polarization (e.g., an action potential) continue uninterrupted during the interval of time between section removal and artificial-section integration. This was to be achieved by chemical sensors (detecting membrane depolarization) operatively connected to actuators that manipulate ionic concentration on the other side of the membrane gap, via the release or uptake of ions from biochemical inventories, so as to induce membrane depolarization on the opposite side of the membrane gap at the right time. Such techniques as partially freezing the cell, so as to slow the rate of membrane depolarization and/or the propagation velocity of action potentials, were also considered.

The next chapter describes my continued work in 2008, focusing on (a) the design requirements for replicating the neural plasticity necessary for memory and subjectivity, (b) the active and conscious modulation and modification of neural operation, (c) wireless synaptic transmission, (d) ways to integrate new neural networks (i.e., mental amplification and augmentation) without disrupting the operation of existing neural networks and regions, and (e) a gradual transition from, or intermediary phase between, the physical (i.e., prosthetic) approach and the informational (i.e., computational, or mind-uploading proper) approach.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.


Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese

Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 18, 2013
******************************
This essay is the third chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first two chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death” and “Immortality: Material or Ethereal? Nanotech Does Both!“.
***

The simplest approach to the functional replication of biological neurons I conceived of during this period involved what is normally called a “black-box” model of a neuron. This was already a concept in the wider brain-emulation community, though I had yet to find out about it. It is even simpler than the mathematically weighted Artificial Neurons discussed in the previous chapter. Rather than emulating or simulating the behavior of a neuron (i.e., using actual computational—or, more generally, signal—processing), we (1) determine the range of input values that a neuron responds to, (2) stimulate the neuron at each interval (the number of intervals depending on the precision of the stimulus) within that input-range, and (3) record the corresponding range of outputs.

This reduces the neuron to essentially a look-up table (or, more formally, an associative array). The input ranges I originally considered (in 2007) consisted of a range of electrical potentials, but were later (in 2008) developed to include different cumulative organizations of specific voltage values (i.e., some inputs activated and others not) and finally the chemical inputs and outputs of neurons. The black-box approach eventually came to be applied at the sub-neuron scale—e.g., to sections of the cellular membrane. This creates a greater degree of functional precision, bringing the functional modality of the black-box NRU class into greater accordance with that of biological neurons. (I.e., it is closer to biological neurons because they do in fact process multiple inputs separately, rather than as a single cumulative sum, as in the previous versions of the black-box approach.) We would also have a higher degree of variability for a given quantity of inputs.
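The black-box scheme reduces to a small amount of code. In this sketch, `stimulate()` is an invented stand-in for the real measurement step on a biological neuron, and the numeric ranges are arbitrary:

```python
# Black-box neuron: (1)-(3) sweep the input range at fixed intervals and
# record each response, then replay responses via lookup (an associative
# array) with no internal modeling of the neuron at all.

def stimulate(input_mv):
    # Placeholder for measuring the biological neuron's actual response.
    return 2.0 * input_mv + 1.0

def build_table(lo, hi, step):
    """Sweep the input range and record the output at each interval."""
    table = {}
    mv = lo
    while mv <= hi:
        table[mv] = stimulate(mv)
        mv += step
    return table

table = build_table(0.0, 10.0, 2.5)  # samples at 0.0, 2.5, 5.0, 7.5, 10.0

def black_box(input_mv):
    """Replay the recorded response for the nearest sampled input."""
    nearest = min(table, key=lambda k: abs(k - input_mv))
    return table[nearest]

assert black_box(5.0) == 11.0  # exact sampled input: recorded output
assert black_box(5.9) == 11.0  # snaps to the nearest sampled interval (5.0)
```

The `step` parameter corresponds to the precision of the stimulus: the finer the sweep, the closer the lookup table approximates the neuron’s true input-output mapping.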

I soon chanced upon literature dealing with MEMS (micro-electro-mechanical systems) and NEMS (nano-electro-mechanical systems), which eventually led me to nanotechnology and its use in nanosurgery in particular. I saw nanotechnology as the preferred technological infrastructure regardless of the approach used; its physical nature (i.e., operational and functional modalities) could facilitate the electrical and chemical processes of the neuron if the physicalist-functionalist (i.e., physically embodied or ‘prosthetic’) approach proved either preferable or required, while the computation required for its normative functioning (regardless of its particular application) assured that it could facilitate the informationalist-functionalist approach (i.e., computational emulation or simulation of neurons) if that approach proved preferable. This was true of MEMS as well, with the sole exception of not being able to directly synthesize neurotransmitters via mechanosynthesis, instead being limited in this regard to the release of pre-synthesized biochemical inventories. Thus I felt that I was able to work on conceptual development of the methodological and technological infrastructure underlying both (or at least on variations to the existing operational modalities of MEMS and NEMS so as to make them suitable for their intended use), without having to definitively choose one technological/methodological infrastructure over the other. Moreover, there could be processes that are reducible to computation yet still fail to be included in a computational emulation due to our simply failing to discover the principles underlying them.
The prosthetic approach had the potential of replicating this aspect by integrating such a process, as it exists in the biological environment, into its own physical operation, performing iterative maintenance or replacement of the biological process until such a time as we are able to discover the underlying principles of those processes (a prerequisite for discovering how they contribute to the emergent computation occurring in the neuron) and thus include them in the informationalist-functionalist approach.

Also, I had by this time come across the existing approaches to Mind-Uploading and Whole-Brain Emulation (WBE), including Randal Koene’s minduploading.org, and realized that the notion of immortality through gradually replacing biological neurons with functional equivalents wasn’t strictly my own. I hadn’t yet come across Kurzweil’s thinking in regard to gradual uploading described in The Singularity is Near (where he suggests a similarly nanotechnological approach), and so felt that there was a gap in the extant literature in regard to how the emulated neurons or neural networks were to communicate with existing biological neurons (which is an essential requirement of gradual uploading and thus of any approach meant to facilitate subjective-continuity through substrate replacement). Thus my perceived role changed from being the father of this concept to filling in the gaps and inconsistencies in the already-extant approach and further developing it past its present state. This is another aspect informing my choice to work on and further diversify both the computational and the physical-prosthetic approach—because this, along with the artificial-biological neural-communication problem, was what I perceived as remaining to be done after discovering WBE.

The anticipated use of MEMS and NEMS in emulating the physical processes of the neurons included at first simply electrical potentials, but eventually developed to include the chemical aspects of the neuron as well, in tandem with my increasing understanding of neuroscience. I had by this time come across Drexler’s Engines of Creation, which was my first introduction to antecedent proposals for immortality—specifically his notion of iterative cellular upkeep and repair performed by nanobots. I applied his concept of mechanosynthesis to the NRUs to facilitate the artificial synthesis of neurotransmitters. I eventually realized that the use of pre-synthesized chemical stores of neurotransmitters was a simpler approach that could be implemented via MEMS, thus being more inclusive for not necessitating nanotechnology as a required technological infrastructure. I also soon realized that we could eliminate the need for neurotransmitters completely by recording how specific neurotransmitters affect the nature of membrane depolarization at the post-synaptic membrane (i.e., the length and degree of depolarization or hyperpolarization, and possibly the diameter of ion-channels or the differential opening of ion-channels—that is, some and not others) and encoding this into the post-synaptic NRU. Each possible neurotransmitter (or emergent pattern of neurotransmitters; salient variables include type, quantity, and relative location) would be assigned a discrete voltage, such that transmitting that voltage makes the post-synaptic NRU’s controlling-circuit implement the membrane-polarization changes (via changing the number of open artificial ion-channels, or how long they remain open or closed, or their diameter/porosity) corresponding to the changes in biological post-synaptic membrane depolarization normally caused by that neurotransmitter.
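A toy sketch of this encoding scheme (all voltage codes and effect parameters below are invented for illustration):

```python
# Neurotransmitter-elimination scheme: each neurotransmitter (or pattern) is
# assigned a discrete signaling voltage, and the post-synaptic NRU's
# controlling-circuit maps that voltage to pre-recorded membrane-polarization
# changes. All codes and parameter values are hypothetical.

NT_TO_CODE = {"glutamate": 0.1, "GABA": 0.2}  # discrete voltage per transmitter

CODE_TO_EFFECT = {  # pre-recorded post-synaptic effects of each transmitter
    0.1: {"polarization_shift_mv": +15.0, "open_channels": 8, "duration_ms": 2.0},
    0.2: {"polarization_shift_mv": -10.0, "open_channels": 3, "duration_ms": 4.0},
}

def presynaptic_signal(neurotransmitter):
    """Pre-synaptic NRU transmits the assigned discrete voltage."""
    return NT_TO_CODE[neurotransmitter]

def postsynaptic_response(code):
    """Controlling-circuit implements the recorded polarization changes."""
    return CODE_TO_EFFECT[code]

effect = postsynaptic_response(presynaptic_signal("GABA"))
assert effect["polarization_shift_mv"] == -10.0  # inhibitory: hyperpolarizes
```

The chemical synapse is thereby replaced end-to-end by a voltage code plus a lookup of its recorded post-synaptic consequences.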

In terms of the enhancement/self-modification side of things, I also realized during this period that mental augmentation (particularly the intensive integration of artificial neural networks with the existing brain) increases the efficacy of gradual uploading by decreasing the proportion of your total brain occupied by the biological region being replaced—thus effectively making that portion’s temporary operational disconnection from the rest of the brain more negligible with respect to concerns of subjective-continuity.

While I was thinking of the societal implications of self-modification and self-modulation in general, I wasn’t really consciously trying to do active conceptual work (e.g., working on designs for pragmatic technologies and methodologies as I was with limitless-longevity) on this side of the project due to seeing the end of death as being a much more pressing moral imperative than increasing our degree of self-determination. The 100,000 unprecedented calamities that befall humanity every day cannot wait; for these dying fires it is now or neverness.

Virtual Verification Experiments

The various alternative approaches to gradual substrate-replacement were meant to be alternative designs contingent upon various premises about what was needed to replicate functionality while retaining subjective-continuity through gradual replacement. I saw the various embodiments as being narrowed down through empirical validation prior to any whole-brain replication experiments. However, I now see that multiple alternative approaches—based, for example, on computational emulation (informationalist-functionalist) and physical replication (physicalist-functionalist), the two main approaches thus far discussed—would have concurrent appeal to different segments of the population. The physicalist-functionalist approach might appeal to wide numbers of people who, for one metaphysical prescription or another, don’t believe enough in the computational reducibility of mind to bet their lives on it.

These experiments originally consisted of applying sensors to a given biological neuron, constructing NRUs based on a series of variations on the two main approaches, running each, and looking for any functional divergence over time. This is essentially the same approach outlined in the WBE Roadmap (which I had yet to discover at that point), which suggests a validation approach involving experiments done on single neurons before moving on to the organismal emulation of increasingly complex species, up to and including the human. My thinking in regard to these experiments evolved over the next few years to also include some novel approaches that I don’t think have yet been discussed in communities interested in brain-emulation.

An equivalent physical or computational simulation of the biological neuron’s environment is required to verify functional equivalence; otherwise we wouldn’t be able to distinguish between functional divergence due to an insufficient replication-approach/NRU-design and functional divergence due to a difference in either input or operation between the model and the original (caused by insufficiently synchronizing the environmental parameters of the NRU and its corresponding original). Isolating these neurons from their organismal environment allows the necessary fidelity (and thus computational intensity) of the simulation to be minimized by reducing the number of environmental variables affecting the biological neuron during the span of the initial experiments. Moreover, even if this doesn’t give us a perfectly reliable model of the efficacy of functional replication given the number of environmental variables one expects a neuron belonging to a full brain to have, it is a fair approximation. Some NRU designs might fail in a relatively simple neuronal environment, and thus testing all NRU designs using a number of environmental variables similar to that of the biological brain might be unnecessary (and thus economically prohibitive) given the cost-benefit ratio. And since we need to isolate the neuron to perform any early non-whole-organism experiments (i.e., on individual neurons) at all, having precise control over the number and nature of environmental variables would be relatively easy, as this is already an important part of the methodology used for normative biological experimentation anyway—lack of control over environmental variables makes for an inconsistent methodology and thus for unreliable data.

And as we scale up to the whole-network and eventually the organismal level, a similar reduction of the computational requirements of the NRU’s environmental simulation is possible by replacing the inputs or sensory mechanisms (from single photocells to whole organs) with VR-modulated input. The required complexity, and thus computational intensity, of a sensorially mediated environment can be vastly reduced if the normative sensory environment of the organism is supplanted with a much-simplified VR simulation.

Note that the efficacy of this approach in comparison with the first (reducing actual environmental variables) is hypothetically greater because going from a simplified VR version to the original sensorial environment is a difference not of category but of degree. Thus a potentially fruitful variation on the first experiment (physical reduction of a biological neuron’s environmental variables) would be not the complete elimination of environmental variables, but rather a decrease in the range or degree of deviation of each variable—retaining all the categories while merely reducing their degree.
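A toy sketch of this degree-reduction variant (the variable names, baseline values, and the ±10% band are all invented):

```python
# Degree-reduction experimental variant: keep every category of environmental
# variable, but clamp each to a narrowed band around its baseline, reducing
# degree of deviation rather than eliminating categories. All values invented.

BASELINES = {"temperature": 37.0, "ph": 7.4, "glutamate_conc": 1.0}

def reduce_degree(sample, fraction=0.1):
    """Clamp each variable to within +/- fraction of its baseline value."""
    reduced = {}
    for name, value in sample.items():
        base = BASELINES[name]
        lo, hi = base * (1 - fraction), base * (1 + fraction)
        reduced[name] = min(max(value, lo), hi)
    return reduced

out = reduce_degree({"temperature": 42.0, "ph": 7.4, "glutamate_conc": 0.2})
assert out["temperature"] == 37.0 * 1.1  # clamped down to the narrowed band
assert out["ph"] == 7.4                  # already within band: unchanged
```

No variable is removed, so the test environment differs from the original in degree only, which is what makes the subsequent extrapolation to the full environment less categorical.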

Anecdotally, one novel modification conceived during this period involves distributing sensors (operatively connected to the sensory areas of the CNS) in the brain itself, so that we can viscerally sense ourselves thinking—the notion of metasensation: a sensorial infinite regress caused by having sensors in the sensory modules of the CNS, essentially allowing one to sense oneself sensing oneself sensing.

Another is a seeming refigurement of David Pearce’s Hedonistic Imperative—namely, the use of active NRU modulation to negate the effects of cell (or, more generally, stimulus-response) desensitization—the fact that the more times we experience something, or indeed even think something, the more it decreases in intensity. I felt that this was what made some of us lose interest in our lovers and become bored by things we once enjoyed. If we were able to stop cell desensitization, we wouldn’t have to needlessly lose experiential amplitude for the things we love.

In the next chapter I will describe the work I did in the first months of 2008, during which I worked almost wholly on conceptual varieties of the physically embodied prosthetic (i.e., physical-functionalist) approach (particularly in gradually replacing subsections of individual neurons to increase how gradual the cumulative procedure is) for several reasons:

The original utility of ‘hedging our bets’ as discussed earlier—developing multiple approaches increases evolutionary diversity; thus, if one approach fails, we have other approaches to try.

I felt the computational side was already largely developed in the work done by others in Whole-Brain Emulation, and thus that I would be benefiting the larger objective of indefinite longevity more by focusing on those areas that were then comparatively less developed.

The perceived benefit of a new approach to subjective-continuity through a substrate-replacement procedure aiming to increase the likelihood of gradual uploading’s success by increasing the procedure’s cumulative degree of graduality. The approach was called Iterative Gradual Replacement and consisted of undergoing several gradual-replacement procedures, wherein the class of NRU used becomes progressively less similar to the operational modality of the original, biological neurons with each iteration; the greater the number of iterations used, the less discontinuous each replacement-phase is in relation to its preceding and succeeding phases. The most basic embodiment of this approach would involve gradual replacement with physical-functionalist (prosthetic) NRUs that are in turn gradually replaced with informationalist-functionalist (computational/emulatory) NRUs. My qualms with this approach today stem from the observation that the operational modalities of the physically embodied NRUs seem as discontinuous in relation to the operational modalities of the computational NRUs as the operational modalities of the biological neurons do. The problem seems to result from the lack of an intermediary stage between physical embodiment and computational (or second-order) embodiment.

