This essay is the second chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first chapter was previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”.
In August 2006 I conceived of the initial cybernetic brain-transplant procedure. It originated from a very simple, even intuitive sentiment: if heart and lung machines and prosthetic organs existed, then why couldn’t they be combined with modern (and future) robotics to keep the brain alive past the death of its biological body? I saw a possibility, felt its magnitude, and threw myself into realizing it. I couldn’t think of a nobler quest than the final eradication of involuntary death, and felt willing to spend the rest of my life trying to make it happen.
First I collected research on organic brain transplantation; on maintaining the brain’s homeostatic and regulatory mechanisms outside the body (or in this case, without the body); on a host of prosthetic and robotic technologies (including sensory prosthesis and substitution); and on work in Brain-Computer Interface (BCI) technologies that would eventually allow a given brain to control its new, non-biological body. In essence, I was collecting the disparate mechanisms and technologies that would collectively converge to facilitate the creation of a fully cybernetic body to house the organic brain and keep it alive past the death of its homeostatic and regulatory organs.
I had by this point come across online literature on Artificial Neurons (ANs) and Artificial Neural Networks (ANNs), which are basically simplified mathematical models of neurons meant to process information in a way coarsely comparable to them. There was no mention in the literature of integrating them with existing neurons, or of replacing existing neurons, toward the objective of immortality; their use was merely as an interesting approach to computation, particularly well-suited to certain problems. While artificial neurons can be run on general-purpose hardware (massively parallel architectures being the most efficient for ANNs), I had something more akin to neuromorphic hardware in mind (though I wasn’t aware of that term just yet).
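The basic idea of an artificial neuron as a simplified mathematical model can be sketched in a few lines. The following minimal example is a McCulloch-Pitts-style threshold unit; all names and values are invented for illustration and are not drawn from any particular library:

```python
# Minimal sketch of an artificial neuron as a pure mathematical model:
# a weighted sum of inputs passed through a hard threshold.
# All parameter values here are illustrative, chosen only for the example.

def artificial_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Example: two excitatory inputs and one inhibitory input.
output = artificial_neuron(inputs=[1, 1, 1],
                           weights=[0.6, 0.6, -0.4],
                           threshold=0.5)
# activation = 0.6 + 0.6 - 0.4 = 0.8, which reaches 0.5, so the unit fires.
```

Nothing about such a model requires any particular hardware: worked out on paper, run on a CPU, or embodied in a dedicated device, the mathematics is identical, which is precisely what makes the question of physical versus virtual embodiment a live one.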
At its most fundamental level, an Artificial Neuron need not even be physical at all. Its basic definition is a mathematical model roughly based on neuronal operation – and there is nothing precluding that model from existing solely on paper, with no actual computation going on. When I discovered them, I had thought that a given artificial neuron was a physically embodied entity rather than a software simulation – i.e., an electronic device that operates in a way comparable to biological neurons. Upon learning that they were mathematical models, however, and that each AN needn’t be a separate entity from the rest of the ANs in a given AN Network, I saw no problem in designing them so as to be separate physical entities (which they needed to be in order to fit the purposes I had for them – namely, the gradual replacement of biological neurons with prosthetic functional equivalents). Each AN would be a software entity run on a piece of computational substrate, enclosed in a protective casing allowing it to co-exist with the biological neurons already in place. The mathematical or informational outputs of the simulated neuron would be translated into biophysical, chemical, and electrical outputs by operatively connecting the simulation to an appropriate series of actuators (ranging from something as simple as producing electric fields or currents to the release of chemical stores of neurotransmitters), and likewise a series of sensors would translate biophysical, chemical, and electrical properties into the mathematical or informational form they would need to take to be accepted as input by the simulated AN.
Thus at this point I didn’t make a fundamental distinction between replicating the functions and operations of a neuron via physical embodiment (e.g., via physically embodied electrical, chemical, and/or electromechanical systems) or via virtual embodiment (usefully considered as 2nd-order embodiment, e.g., via a mathematical or computational model, whether simulation or emulation, run on a 1st-order physically embodied computational substrate).
An appreciation of the potential advantages, disadvantages, and categorical differences between these two approaches was still a few months away. When I discovered ANs, still thinking of them as physically embodied electronic devices rather than as mathematical or computational models, I hadn’t yet moved on to ways of preserving the organic brain itself so as to delay its organic death. Their utility in constituting a more permanent, durable, and readily repairable supplement for our biological neurons wasn’t yet apparent.
I initially saw their utility as being intelligence amplification, extension, and modification through their integration with the existing biological brain. I realized that they were categorically different from Brain-Computer Interfaces (BCIs) and normative neural prostheses in being able to become an integral and continuous part of our minds and personalities – or, more properly, the subjective, experiential parts of our minds. If they communicated with single neurons and interacted with them on their own terms—if the two were operationally indistinct—then they could become a continuous part of us in a way that didn’t seem possible for normative BCI, due to their fundamental operational dissimilarity with existing biological neural networks. I also collected research on the artificial synthesis and regeneration of biological neurons as an alternative to ANs. This approach would replace an aging or dying neuron with an artificially synthesized but still structurally and operationally biological neuron, so as to maintain the aging or dying neuron’s existing connections and relative location. I saw this procedure (i.e., adding artificial, or artificially synthesized but still biological, neurons to the existing neurons constituting our brains – not yet for the purpose of gradually replacing the brain, but instead for the purpose of mental expansion and amplification) as not only allowing us to extend our existing functional and experiential modalities (e.g., making us smarter through an increase in synaptic density and connectivity, and in the number of neurons in general), but even to create fundamentally new functional and experiential modalities, categorically unimaginable to us now, via the integration of wholly new Artificial Neural Networks embodying such new modalities.
Note that I saw this as newly possible with my cybernetic-body approach because additional space could be made for the additional neurons and neural networks, whereas the degree with which we could integrate new, artificial neural networks in a normal biological body would be limited by the available volume of the unmodified skull.
Before I discovered ANs, I speculated in my notes as to whether the “bionic nerves” alluded to in some of the literature I had collected by this point (specifically regarding BCI, neural prosthesis, and the ability to operatively connect a robotic prosthetic extremity – e.g., an arm or a leg – via BCI) could be used to extend the total number of neurons and synaptic connections in the biological brain. This sprang from my knowledge of the operational similarities between neurons and muscle cells, both members of the larger class of excitable cells.
Kurzweil’s cyborgification approach (i.e., that we could integrate non-biological systems with our biological brains to such an extent that the biological portions become so small as to be negligible to our subjective-continuity when they succumb to cell-death, thus achieving effective immortality without needing to actually replace any of our existing biological neurons at all) may have been implicit in this concept. I envisioned our brains increasing in size many times over, such that the majority of our minds would be embodied or instantiated in larger part by the artificial portions than by the biological ones. The degree to which the loss of a part of our brain will affect our emergent personalities depends on how big that lost part is in comparison to the total size of the brain (other potential metrics besides size include connectivity and the degree to which other systems depend on that portion for their own normative operation) – the loss of a lobe being much worse than the loss of a neuron – and this follows naturally from the initial premise. The lack of any explicit statement of this realization in my notes during this period, however, makes this mere speculation.
It wasn’t until November 11, 2006, that I had the fundamental insight underlying mind-uploading—that the replacement of existing biological neurons with non-biological functional equivalents maintaining the existing relative locations and connections of those biological neurons could very well facilitate maintaining the memory and personality embodied therein or instantiated thereby—essentially achieving potential technological immortality, since the approach is based on replacement, and iterations of replacement-cycles can be run indefinitely. Moreover, the fact that we would be manufacturing such functional equivalents ourselves means not only that we could diagnose potential eventual dysfunctions more easily and quickly, but also that we could manufacture them so as to have readily replaceable parts, thus simplifying the process of physically remediating any such dysfunction or operational degradation – even going so far as to include systems for the safe import and export of replacement components, or to make all such components readily detachable, so that we don’t have to damage adjacent structures and systems in the process of removing a given component.
Perhaps it wasn’t so large a conceptual step from knowledge of the existence of computational models of neurons to the realization of using them to replace existing biological neurons towards the aim of immortality. Perhaps I take too much credit for independently conceiving both the underlying conceptual gestalt of mind-uploading, as well as some specific technologies and methodologies for its pragmatic technological implementation. Nonetheless, it was a realization I arrived at on my own, and was one that I felt would allow us to escape the biological death of the brain itself.
While I was aware (after a little more research) that ANNs were mathematical (and thus computational) models of neurons, hereafter referred to as the informationalist-functionalist approach, I felt that a physically embodied (i.e., not computationally emulated or simulated) prosthetic approach, hereafter referred to as the physicalist-functionalist approach, would be a better approach to take. This was because even if the brain were completely reducible to computation, a prosthetic approach would necessarily facilitate the computation underlying the functioning of the neuron (as the physical operations of biological neurons do presently), and if the brain proved to be computationally irreducible, then the prosthetic approach would in such a case presumably preserve whatever salient physical processes were necessary. So the prosthetic approach didn’t necessitate the computational-reducibility premise – but neither did it preclude such a view, thereby allowing me to hedge my bets and increase the cumulative likelihood of maintaining subjective-continuity of consciousness through substrate-replacement in general.
This marks a telling proclivity recurrent throughout my project: the development of mutually exclusive and methodologically and/or technologically alternate systems for a given objective, each based upon alternate premises and contingencies – a sort of possibilizational web unfurling fore and outward. After all, if one approach failed, then we had alternate approaches to try. This seemed like the work-ethic and conceptualizational methodology that would best ensure the eventual success of the project.
I also had less assurance in the sufficiency of the informationalist-functionalist approach at the time, stemming mainly from a misconception about the premises of normative Whole-Brain Emulation (WBE). When I first discovered ANs, I was dubious about the computational reducibility of the mind because I thought the claim relied on the premise that neurons act in a computational fashion (i.e., like normative computational paradigms) to begin with—thus a conflation of classical computation with neural operation—rather than on the conclusion, drawn from the Church-Turing thesis, that mind is computable because the universe is. It is not that the brain is a computer to begin with, but that we can model any physical process via mathematical/computational emulation and simulation. The latter is the correct view, and I didn’t really realize this until after I had discovered the WBE roadmap in 2010. This fundamental misconception allowed me, however, to independently arrive at the insight underlying the real premise of WBE: that combining premise A – that we have various mathematical and computational models of neuron behavior – with premise B – that we can run mathematical models on computers – ultimately yields the conclusion C – that we can simply run the relevant mathematical models on computational substrate, thereby effectively instantiating the mind “embodied” in those neural operations while simultaneously eliminating many logistical and technological challenges of the prosthetic approach. This seemed likelier than the original assumption—conflating neuronal activity with normative computation, as a special case not applicable to, say, muscle cells or skin cells, which isn’t the presumption WBE makes at all—because this approach only requires the ability to mathematically model anything, rather than relying on a fundamental equivalence between two different types of physical system (neuron and classical computer).
The fact that I mistakenly saw it as an approach to emulation that was categorically dissimilar to normative WBE also helped urge me on to continue conceptual development of the various sub-aims of the project after having found that the idea of brain emulation already existed, because I thought that my approach was sufficiently different to warrant my continued effort.
There are other reasons for suspecting that mind may not be computationally reducible using current computational paradigms – reasons that rely on neither vitalism (i.e., the claim that mind is at least partially immaterial and irreducible to physical processes) nor the invalidity of the Church-Turing thesis. This line of reasoning has nothing to do with functionality and everything to do with possible physical bases for subjective-continuity: both (a) immediate subjective-continuity (i.e., how can we be a unified, continuous subjectivity if all our component parts are discrete and separate in space?), which can be considered the capacity to have subjective experience, also called sentience (as opposed to sapience, which designates the higher cognitive capacities like abstract thinking), and (b) temporal subjective-continuity (i.e., how do we survive as continuous subjectivities through a process of gradual substrate replacement?). Thus this argument impacts the possibility of computationally reproducing mind only insofar as the definition of mind is not strictly functional but is made to include a subjective sense of self—or immediate subjective-continuity. Note that subjective-continuity through gradual replacement is not itself speculative; it has proof of concept in the normal metabolic replacement of the neuron’s constituent molecules. Each of us is materially a different person than we were seven years ago, and we still claim to retain subjective-continuity. Thus, gradual replacement works; it is just the scale and rate required that are in question.
This is another way in which my approach and project differs from WBE. WBE equates functional equivalence (i.e., the same output via different processes) with subjective equivalence, whereas my approach involved developing variant approaches to neuron-replication-unit design that were each based on a different hypothetical basis for instantive subjective continuity.
Are Current Computational Paradigms Sufficient?
Biological neurons are both analog and binary. It is useful to consider a 1st tier of analog processes, manifest in the graded (post-synaptic) potentials occurring all over the neuronal soma and dendrites, with a 2nd tier of binary processing, in that the sum of these graded potentials either crosses the threshold value needed for the neuron to fire an action potential, or falls short of it and the neuron fails to fire. Thus the analog processes form the basis of the digital ones. Moreover, the neuron is in an analog state even in the absence of membrane depolarization, through the generation of the resting membrane potential (maintained via active ion-transport proteins), which is analog rather than binary because it always undergoes minor fluctuations, being instantiated by an active process (ion pumps). Thus the neuron at any given time is always in the process of a state-transition (including minor state-transitions still within the variation-range allowed by a given higher-level static state; e.g., the resting membrane potential is a single state, yet still undergoes minor fluctuations because the ions and components manifesting it undergo state-transitions without the resting membrane potential itself undergoing one), and thus is never definitively on or off. This brings us to the first potential physical basis for both immediate and temporal subjective-continuity. Analog states are continuous, and the fact that there is never a definitive break in the processes occurring at the lower levels of the neuron represents a potential basis for our subjective sense of immediate and temporal continuity.
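The two tiers described above can be made concrete with a toy leaky integrate-and-fire model – a standard simplified neuron model, offered here only as an illustrative sketch (all parameter values are arbitrary): the membrane potential drifts continuously (the analog tier), while spike emission is all-or-nothing (the binary tier).

```python
# Illustrative leaky integrate-and-fire sketch. The membrane potential evolves
# continuously (analog, 1st tier); spiking is all-or-nothing (binary, 2nd tier).
# Parameter values are arbitrary, chosen only for illustration.

def simulate_lif(input_current, dt=1.0, v_rest=-70.0, v_threshold=-55.0,
                 v_reset=-70.0, tau=10.0, resistance=1.0):
    """Return (voltage_trace, spike_steps) for a simple LIF neuron."""
    v = v_rest
    trace, spikes = [], []
    for step, current in enumerate(input_current):
        # Analog tier: graded, continuous drift of the membrane potential,
        # leaking back toward rest while driven by the input current.
        v += (-(v - v_rest) + resistance * current) * dt / tau
        # Binary tier: the neuron either crosses threshold and fires, or not.
        if v >= v_threshold:
            spikes.append(step)
            v = v_reset
        trace.append(v)
    return trace, spikes

# Constant suprathreshold drive produces repeated threshold crossings (spikes);
# zero drive leaves the potential resting at v_rest, and no spikes occur.
trace, spikes = simulate_lif([20.0] * 50)
```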
Paradigms of digital computation, on the other hand, are at the lowest scale either definitively on or definitively off. While any voltage within a certain range will cause the generation of an output, it is still at base binary because in the absence of input the logic elements are not producing any sort of fluctuating voltage—they are definitively off. In binary computation, the substrates undergo a break (i.e., region of discontinuity) in their processing in the absence of inputs, and are in this way fundamentally dissimilar to the low-level operational modality of biological neurons by virtue of being procedurally discrete rather than procedurally continuous.
If the premise holds that the analog and procedurally continuous nature of neuron functioning (including action potentials, the resting membrane potential, and metabolic processes) forms a physical basis for immediate and temporal subjective-continuity, then current digital paradigms of computation may prove insufficient for maintaining subjective-continuity if used as the substrate in a gradual-replacement procedure, while still being sufficient to functionally replicate the mind in all empirically verifiable metrics and measures. This is due both to the operational modality of binary processing (i.e., lack of analog output) and to its procedural modality (the lack of temporal continuity, i.e., the failure to produce minor fluctuations around a baseline state when in a resting or inoperative state). Note that the two can come apart: a logic element could have a fluctuating resting voltage rather than the absence of any voltage, and could thus be procedurally continuous while still being operationally discrete by producing solely binary outputs.
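That closing distinction – procedurally continuous yet operationally discrete – can be sketched as a hypothetical logic element whose resting state is a fluctuating nonzero baseline rather than a dead zero, while its output remains strictly binary. All names and values below are invented for illustration:

```python
# Sketch of an element that is operationally discrete (binary output) yet
# procedurally continuous (its resting state is a fluctuating baseline voltage
# rather than a dead zero). Hypothetical names and values throughout.

import random

def procedurally_continuous_gate(input_voltage, rng, baseline=0.1,
                                 jitter=0.02, logic_threshold=0.7):
    """Return (internal_voltage, binary_output).

    Even with no meaningful input, the internal voltage fluctuates around a
    small baseline -- there is never a definitive 'off' break in processing.
    The output, however, is strictly binary: 1 at or above threshold, else 0.
    """
    internal = input_voltage + baseline + rng.uniform(-jitter, jitter)
    return internal, 1 if internal >= logic_threshold else 0

rng = random.Random(0)
# Resting state: no input, yet the internal signal is nonzero and varying.
resting = [procedurally_continuous_gate(0.0, rng)[0] for _ in range(5)]
# Driven state: a suprathreshold input yields a clean binary 1.
_, out = procedurally_continuous_gate(1.0, rng)
```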
So there are two possibilities here. One is that any physical substrate used to replicate a neuron (whether via 1st-order embodiment, a.k.a. prosthesis/physical systems, or via 2nd-order embodiment, a.k.a. computational emulation or simulation) must not undergo a break in its operation in the absence of input—because biological neurons do not, and this may be a potential basis for instantive subjective-continuity—but rather must produce a continuous or uninterrupted signal when in a “steady state” (i.e., in the absence of inputs). The second possibility includes all the premises of the first, but adds that such an inoperative-state signal (or “no-inputs”-state signal) must undergo minor fluctuations, because only then is a steady stream of causal interaction occurring – producing a perfectly steady signal could be as discontinuous as no signal at all, like “being on pause”.
Thus one reason for developing the physicalist-functionalist (i.e., physically embodied prosthetic) approach to NRU (neuron-replication-unit) design was a hedging of bets, in case (a) current computational substrates fail to replicate a personally continuous mind for the reasons described above; (b) we fail to discover the principles underlying a given physical process—thus being unable to predictively model it—but still succeed in integrating it with the artificial systems comprising the prosthetic approach until such time as we are able to discover its underlying principles; or (c) we find some other, heretofore unanticipated conceptual obstacle to the computational reducibility of mind.
Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors. He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.
Copeland, B. J. (2008). The Church-Turing Thesis. In The Stanford Encyclopedia of Philosophy (Fall 2008 Edition). Retrieved February 28, 2013, from http://plato.stanford.edu/archives/fall2008/entries/church-turing
Crick, F. (1984). Memory and molecular turnover. Nature, 312(5990), 101. PMID: 6504122
Criterion of Falsifiability, Encyclopædia Britannica. Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/201091/criterion-of-falsifiability
Drexler, K. E. (1986). Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Books.
Grabianowski (2007). How Brain-computer Interfaces Work. Retrieved February 28, 2013, from http://computer.howstuffworks.com/brain-computer-interface.htm
Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013, from http://www.minduploading.org/
Martins, N. R., Erlhagen, W. & Freitas Jr., R. A. (2012). Non-destructive whole-brain monitoring using nanorobots: Neural electrical data rate requirements. International Journal of Machine Consciousness, 2011. Retrieved February 28, 2013, from http://www.nanomedicine.com/Papers/NanoroboticBrainMonitoring2012.pdf.
Narayan, A. (2004). Computational Methods for NEMS. Retrieved February 28, 2013, from http://nanohub.org/resources/407.
Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap (Technical Report #2008-3). Retrieved February 28, 2013.
Star, E. N., Kwiatkowski, D. J. & Murthy, V. N. (2002). Rapid turnover of actin in dendritic spines and its regulation by activity. Nature Neuroscience, 5, 239-246.
Tsien, J. Z., Rampon, C., Tang, Y. P. & Shimizu, E. (2000). NMDA receptor dependent synaptic reinforcement as a crucial process for memory consolidation. Science, 290, 1170-1174.
Vladimir, Z. (2013). Neural Network. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/410549/neural-network
Wolf, W. (March 2009). Cyber-physical Systems. In Embedded Computing. Retrieved February 28, 2013, from http://www.jiafuwan.net/download/cyber_physical_systems.pdf