
Wireless Synapses, Artificial Plasticity, and Neuromodulation – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 21, 2013
******************************
This essay is the fifth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first four chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, “Concepts for Functional Replication of Biological Neurons”, and “Gradual Neuron Replacement for the Preservation of Subjective-Continuity”.
***

Morphological Changes for Neural Plasticity

The finished physical-functionalist units would need the ability to change their emergent morphology, not only for active modification of single-neuron functionality but even for basic functional replication of normative neuron behavior, because such replication must account for neural plasticity and the way morphological changes facilitate learning and memory. My original approach involved the use of retractable, telescopic dendrites and axons (with corresponding retractable and telescopic dendritic and axonal spines, respectively) activated electromechanically by the unit-CPU. For morphological changes, providing the edges of each membrane section with an electromechanical hinged connection (i.e., a means of changing the angle of inclination between immediately adjacent sections) allows the emergent morphology to be controllably varied. This eventually developed into an internal compartment designed to detach a given membrane section, move it down into the internal compartment of the neuronal soma or terminal, transport it along a track that stores alternative membrane sections stacked face-to-face (to compensate for limited space), and subsequently replace it with a membrane section containing an alternate functional component (e.g., an ion pump or an ion channel, whether voltage-gated or ligand-gated) embedded therein. Note that this approach was also conceived of as an alternative to retractable axons/dendrites and axonal/dendritic spines: by attaching additional membrane sections at a very steep angle of inclination (or at a lesser inclination with a greater quantity of segments), an emergent section of artificial membrane can be created that extends out from the biological membrane in the same way as axons and dendrites.

However, this approach was eventually supplemented by one requiring less technological infrastructure (i.e., one simpler, and thus more economical and realizable). If the size of the integral-membrane components is small enough (preferably smaller than their biological analogues), then differential activation of components or membrane sections would achieve the same effect as changing the organization or type of integral-membrane components, effectively eliminating the need to actually interchange membrane sections at all.

Active Neuronal Modulation and Modification

The technological and methodological infrastructure used to facilitate neural plasticity can also be used for active modification and modulation of neural behavior (and of the emergent functionality determined by local neuronal behavior) towards the aim of mental augmentation and modification. Potential uses already discussed include mental amplification (increasing or augmenting existing functional modalities, i.e., intelligence, emotion, morality) and mental augmentation (the creation of categorically new functional and experiential modalities). While the distinction between modification and modulation isn’t definitive, a useful way of differentiating them is to consider modification as morphological changes creating new functional modalities, and modulation as actively varying the operation of existing structures/processes, not through morphological change but rather through changes to the operation of integral-membrane components or to the properties of the local environment (e.g., increasing local ionic concentrations).

Modulation: A Less Discontinuous Alternative to Morphological Modification

The use of modulation to achieve the effective results of morphological changes seemed like a hypothetically less discontinuous alternative to morphological change itself (and thus one with a hypothetically greater probability of preserving subjective-continuity). I am more dubious about the validity of this approach now, because the emergent functionality (normatively determined by morphological features) is still changed in an effectively equivalent manner.

The Eventual Replacement of Neural Ionic Solutions with Direct Electric Fields

Upon full gradual replacement of the CNS with physical-functionalist equivalents, the preferred embodiment consisted of replacing the ionic solutions with electric fields that preserve the electric potential instantiated by the difference in ionic concentrations on the respective sides of the membrane. Such electric fields can be generated directly, without recourse to electrochemicals for manifesting them. In such a case the integral-membrane components would be replaced by a means of generating and maintaining a static and/or dynamic electric field on either side of the membrane, or even merely of generating an electric potential difference (i.e., a voltage) with solid-state electronics.

This procedure would allow a fraction of the speed-up (that is, the increased rate of subjective perception of time, which extends to speed of thought) offered by emulatory (i.e., strictly computational) replication-methods, because the system would no longer be limited by the rate of passive ionic diffusion, but instead only by the propagation velocity of electric or electromagnetic fields.
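As a rough, back-of-the-envelope illustration of the ceiling involved (both velocities below are order-of-magnitude assumptions for the sake of the comparison, not measurements):

```python
# Order-of-magnitude comparison of the two propagation limits discussed
# above. Both figures are illustrative assumptions, not measured values.
diffusion_limited_velocity = 100.0   # m/s: upper range of myelinated-axon conduction
field_propagation_velocity = 2.0e8   # m/s: rough EM propagation speed in a solid medium

speedup_ceiling = field_propagation_velocity / diffusion_limited_velocity
print(speedup_ceiling)  # 2000000.0
```

Even under conservative assumptions, the field-propagation limit sits several orders of magnitude above the diffusion limit, which is the point of the paragraph above.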

Wireless Synapses

If we replace the physical synaptic connections the NRU uses to communicate (with both existing biological neurons and other NRUs) with a wireless means of synaptic transmission, we can preserve the same functionality (insofar as it is determined by synaptic connectivity) while allowing any NRU to communicate with any other NRU or biological neuron in the brain at potentially equal speed. First we need a way of converting the output of an NRU or biological neuron into information that can be transmitted wirelessly. For NRUs with computational operational-modalities, this requires no new technological infrastructure, because they already deal with 2nd-order (i.e., not structurally or directly embodied) information: informational-functionalist NRUs deal solely in this type of information, and the cyber-physical-systems sub-class of the physicalist-functionalist NRUs deals with this kind of information in the intermediary stage between sensors and actuators, so converting what would have been a sequence of electromechanical actuations into information isn’t a problem. Only the passive-physicalist-functionalist NRU class requires additional technological infrastructure to accomplish this, because it, unlike the other NRU classes, doesn’t already use computational operational-modalities for its normative operation.

We position receivers within range of every neuron (or, alternatively, NRU) in the brain, connected to actuators whose precise composition depends on the operational modality of the receiving biological neuron or NRU. The receiver translates incoming information into physical actuations (e.g., the release of chemical stores), thereby instantiating that informational output in physical terms. For biological neurons, the receiver’s actuators would consist of a means of electrically stimulating the neuron and of releasable chemical stores of neurotransmitters (or of ions, as an alternate means of electrical stimulation via the manipulation of local ionic concentrations). For informational-functionalist NRUs, the information is already in a form they can accept; such an NRU can simply integrate the information into its extant model. For cyber-physicalist NRUs, the unit’s CPU merely needs to be able to translate that information into the sequence in which it must electromechanically actuate its artificial ion-channels. For passive-physicalist NRUs (i.e., those having no computational hardware devoted to operating individual components at all, operating according to physical feedback between components alone), our only option appears to be translating received information into manipulation of the local environment, vicariously affecting the operation of the NRU (e.g., increasing electric potential through manipulation of local ionic concentrations, or increasing the rate of diffusion via applied electric fields that attract ions and thus achieve the same effect as a steeper electrochemical gradient or potential-difference).
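The receiver's per-class dispatch logic can be sketched as follows; the class labels, event fields, and actuation names here are hypothetical illustrations of the scheme, not part of the original design:

```python
# Hypothetical sketch of the wireless-synapse receiver described above.
# All field names and actuation labels are illustrative assumptions.

def deliver(event, target):
    """Translate a wirelessly received synaptic event into the actuation
    appropriate to the receiving unit's operational modality."""
    kind = target["kind"]
    if kind == "biological":
        # Release stored neurotransmitter (or ionic concentrate) at the
        # receiving biological neuron.
        return ("release_chemical_store", event["transmitter"], event["amount"])
    if kind == "informational":
        # Already information: integrate directly into the running model.
        return ("update_model", event)
    if kind == "cyber_physicalist":
        # The unit's CPU turns the event into an electromechanical
        # actuation sequence for its artificial ion-channels.
        return ("actuate_channels", event["channel_sequence"])
    if kind == "passive_physicalist":
        # No CPU: act on the local environment instead, e.g., raise local
        # ionic concentration to vicariously drive the unit.
        return ("modulate_local_environment", event["target_potential"])
    raise ValueError(f"unknown NRU class: {kind}")
```

The single entry point with four branches mirrors the paragraph's structure: one wireless protocol, four receiving modalities.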

The technological and methodological infrastructure for this is very similar to that used for the “integrational NRUs”, which allows a given NRU-class to communicate with either existing biological neurons or NRUs of an alternate class.

Integrating New Neural Nets Without Functional Distortion of Existing Regions

The use of artificial neural networks (which here designates NRU-networks that do not replicate any existing biological neurons, rather than the normative artificial neural networks mentioned in the first and second parts of this essay), rather than normative neural prosthetics and BCI, was the preferred method of cognitive augmentation (creation of categorically new functional/experiential modalities) and cognitive amplification (the extension of existing functional/experiential modalities). Because they function according to the same operational modality as existing neurons (whether biological or artificial replacements), they can become a continuous part of our “selves”, whereas normative neural prosthetics and BCI are comparatively less likely to become an integral part of our experiential continuum (or subjective sense of self), due to their significant operational dissimilarity to biological neural networks.

A given artificial neural network can be integrated with existing biological networks in a few ways. One is “interior” integration, wherein the new network is “inter-threaded” through existing ones: a given artificial neuron is placed among one or multiple existing networks, and the networks are integrated and connected on a very local level. In “anterior” integration, by contrast, the new network would be integrated in a way comparable to the connection between separate cortical columns, with the majority of integration happening at the periphery of each respective network or cluster.

If the interior-integration approach is used, the functionality of the region may be distorted or negated: neurons that once took a certain amount of time to communicate now take comparatively longer, because the distance between them has been increased to make room for the new artificial neurons. To negate these problematic effects, a means of increasing the speed of communication must be employed. That speed is determined by both (a) the rate of diffusion across the synaptic junction and (b) the rate of diffusion across the neuronal membrane, which in most cases is synonymous with the propagation velocity in the membrane; the exception is myelinated axons, wherein a given action potential “jumps” from node of Ranvier to node of Ranvier, and propagation velocity is instead determined by the thickness and length of the myelinated sections.

My original solution was the use of an artificial membrane morphologically modeled on a myelinated axon and possessing very low capacitance (and thus high propagation velocity; lowering membrane capacitance is how myelination itself speeds conduction), combined with decreasing the capacitance of the existing axon or dendrite of the biological neuron. The cumulative capacitance of both is decreased in proportion to how far apart they are moved. In this way, the propagation velocities of the existing neuron and the connector-terminal are increased, allowing the existing biological neurons to communicate as fast as they did prior to the addition of the artificial neural network. This solution was eventually supplemented by the wireless means of synaptic transmission described above, which allows any neuron to communicate with any other neuron at equal speed.

Gradually Assigning Operational Control of a Physical NRU to a Virtual NRU

This approach allows us to apply the single-neuron gradual replacement facilitated by the physical-functionalist NRU to the informational-functionalist (physically embodied) NRU. A given section of artificial membrane and its integral membrane components are modeled. When this model is functioning in parallel (i.e., synchronization of operative states) with its corresponding membrane section, the normative operational routines of that artificial membrane section (usually controlled by the unit’s CPU and its programming) are subsequently taken over by the computational model—i.e., the physical operation of the artificial membrane section is implemented according to and in correspondence with the operative states of the model. This is done iteratively, with the informationalist-functionalist NRU progressively controlling more and more sections of the membrane until the physical operation of the whole physical-functionalist NRU is controlled by the informational operative states of the informationalist-functionalist NRU. While this concept sprang originally from the approach of using multiple gradual-replacement phases (with a class of model assigned to each phase, wherein each is more dissimilar to the original than the preceding phase, thereby increasing the cumulative degree of graduality), I now see it as a way of facilitating sub-neuron gradual replacement in computational NRUs. Also note that this approach can be used to go from existing biological membrane-sections to a computational NRU, without a physical-functionalist intermediary stage. This, however, is comparatively more complex because the physical-functionalist NRU already has a means of modulating its operative states, whereas the biological neuron does not. 
In such a case the section of lipid-bilayer membrane would presumably have to be operationally isolated from adjacent sections of membrane, using a system of chemical inventories (of either highly concentrated ionic solution or neurotransmitters, depending on the area of membrane) to produce electrochemical output, and chemical sensors to accept the electrochemical input from adjacent sections (i.e., a means of detecting depolarization and hyperpolarization). To facilitate an action potential, for example, the chemical sensors would detect depolarization; the computational NRU would then model the influx of ions through the section of membrane it is replacing and translate the effective result to the opposite edge, via either the release of neurotransmitters or the manipulation of local ionic concentrations, so as to generate the required depolarization at the adjacent section of biological membrane.

Integrational NRU

This consisted of a unit facilitating connection between emulatory (i.e., informational-functionalist) units and existing biological neurons. The output of the emulatory units is converted into electrical and chemical output at the locations where the emulatory NRU makes synaptic connection with other biological neurons, facilitated through electrical stimulation and through the release of chemical inventories (for increasing ionic concentrations and for releasing neurotransmitters), respectively. The input of existing biological neurons making synaptic connections with the emulatory NRU is likewise read by electrical and chemical sensors and converted into informational input corresponding to the operational modality of the informational-functionalist NRU classes.

Solutions to Scale

If we needed NEMS, or something below the scale of the present state of MEMS, for the technological infrastructure of either (a) the electromechanical systems replicating a given section of neuronal membrane, or (b) the systems used to construct and/or integrate the sections, or those used to remove or otherwise operationally isolate the existing section of lipid bilayer membrane being replaced from adjacent sections, a postulated solution was as follows. We take the difference in length between the artificial membrane section and the existing bilipid section (a difference determined by how small we can construct functionally operative artificial ion-channels) and incorporate it as added curvature in the artificial membrane-section, such that its edges converge upon or superpose with the edges of the space left by the removal of the lipid bilayer membrane-section. We would also need to increase the propagation velocity (typically determined by the rate of ionic influx, which in turn is typically determined by the concentration gradient, i.e., the difference in the ionic concentrations on the respective sides of the membrane) such that the action potential reaches the opposite end of the replacement section at the same time that it normally would via the lipid bilayer membrane. This could be accomplished directly by the application of electric fields with a charge opposite that of the ions (which would attract them, thus increasing the rate of diffusion), by increasing the number of open channels or the diameter of existing channels, or simply by increasing the concentration gradient through local manipulation of extracellular and/or intracellular ionic concentration, e.g., through concentrated electrolyte stores of the relevant ion that can be released to increase the local ionic concentration.
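The curvature-compensation idea reduces to a small geometry problem: bow the artificial section of length L into a circular arc until its endpoints span the gap g left by the removed bilayer section. A minimal sketch, assuming a circular arc (so that arc length and chord are related by sin(θ)/θ = g/L, with R = L/2θ):

```python
import math

def arc_radius(arc_len, gap):
    """Find the radius of circular curvature that bows an artificial
    membrane section of length `arc_len` so its endpoints span a gap of
    width `gap` (with gap < arc_len), per the curvature idea above.
    Solves sin(theta)/theta = gap/arc_len by bisection, then R = L/(2*theta).
    """
    ratio = gap / arc_len
    assert 0 < ratio < 1
    lo, hi = 1e-9, math.pi  # half-angle subtended by the arc
    for _ in range(100):
        mid = (lo + hi) / 2
        if math.sin(mid) / mid > ratio:
            lo = mid      # not curved enough yet: increase the half-angle
        else:
            hi = mid
    theta = (lo + hi) / 2
    return arc_len / (2 * theta)
```

For example, a section 10% longer than the gap it must fill (`arc_radius(1.1, 1.0)`) needs only a gentle bow; the chord of the resulting arc matches the gap to within numerical tolerance.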

If the degree of miniaturization is so low as to make this approach untenable (e.g., if the increased curvature still doesn’t allow successful integration), then a hypothesized alternative was to increase the overall space between adjacent neurons, integrate the NRU, replace the normative connections with chemical inventories (of either ionic compound or neurotransmitter) released at the sites of existing connection, and have the NRU (or NRU sub-section, i.e., artificial membrane section) wirelessly control the release of those chemical inventories according to its operative states.

The next chapter describes (a) possible physical bases for subjective-continuity through a gradual-uploading procedure and (b) possible design requirements for in vivo brain-scanning and for systems to construct and integrate the prosthetic neurons with the existing biological brain.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.


Gradual Neuron Replacement for the Preservation of Subjective-Continuity – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 19, 2013
******************************
This essay is the fourth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first three chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, and “Concepts for Functional Replication of Biological Neurons”.
***

Gradual Uploading Applied to Single Neurons (2008)

In early 2008 I was trying to conceptualize a means of applying the logic of gradual replacement to single neurons under the premise that extending the scale of gradual replacement to individual sections of the neuronal membrane and its integral membrane proteins—thus increasing the degree of graduality between replacement sections—would increase the likelihood of subjective-continuity through substrate transfer. I also started moving away from the use of normative nanotechnology as the technological and methodological infrastructure for the NRUs, as it would delay the date at which these systems could be developed and experimentally verified. Instead I started focusing on conceptualizing systems that electromechanically replicate the functional modalities of the small-scale integral-membrane-components of the neuron. I was calling this approach the “active mechanical membrane” to differentiate it from the electro-chemical-mechanical modalities of the nanotech approach. I also started using MEMS rather than NEMS for the underlying technological infrastructure (because MEMS are less restrictive) while identifying NEMS as preferred.

I felt that trying to replicate the metabolic replacement rate in biological neurons should be the ideal to strive for, since we know that subjective-continuity is preserved through the gradual metabolic replacement (a.k.a. molecular turnover) that occurs in the existing biological brain. My approach was to measure the normal rate of metabolic replacement in existing biological neurons and the scale at which such replacement occurs (i.e., are the sections being replaced metabolically single molecules, molecular complexes, or whole molecular clusters?). Then, when replacing sections of the membrane with electromechanical functional equivalents, the same ratio of replacement-section size to replacement time would be applied: the time between sectional replacements would be increased in proportion to how much larger the artificial replacement sections are than the sections involved in normative metabolic replacement. Replacement size (or scale) is defined as the size of the section being replaced, which would be molecular complexes in the case of normative metabolic replacement. Replacement time is defined as the interval between the replacement of a given section and the replacement of a section with which it has causal connection; in metabolic replacement it is the time interval between a given molecular complex being replaced and an adjacent (or directly causally connected) molecular complex being replaced.

I therefore posited the following formula:

 Ta = (Sa/Sb)*Tb,

where Sa is the size of the artificial-membrane-replacement sections, Sb is the size of the metabolic replacement sections, Tb is the time interval between the metabolic replacement of two successive metabolic replacement sections, and Ta is the time interval needing to be applied to the comparatively larger artificial-membrane-replacement sections so as to preserve the same replacement-rate factor (and correspondingly the same degree of graduality) that exists in normative metabolic replacement through the process of gradual replacement on the comparatively larger scale of the artificial-membrane sections.
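The formula can be stated as a one-line function; the sample numbers below are placeholders, since the actual metabolic replacement scale and interval would have to be measured:

```python
def replacement_interval(s_artificial, s_metabolic, t_metabolic):
    """Scale the replacement interval so the larger artificial sections
    preserve the same size-to-time ratio as metabolic turnover:

        Ta = (Sa / Sb) * Tb
    """
    return (s_artificial / s_metabolic) * t_metabolic

# If artificial sections are 1000x the size of the metabolic replacement
# sections, the interval between replacements scales up 1000x as well.
# (The numbers here are placeholders, not empirical values.)
ta = replacement_interval(s_artificial=1000.0, s_metabolic=1.0, t_metabolic=2.0)
print(ta)  # 2000.0
```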

The use of the time-to-scale factor corresponding with normative molecular turnover or “metabolic replacement” follows from the fact that we know subjective-continuity through substrate replacement is successful at this time-to-scale ratio. However, the lack of a non-arbitrarily quantifiable measure of time, and the fact that time is infinitely divisible (i.e., can be broken down into smaller intervals to an arbitrarily large degree), logically necessitate that the salient variable is not time but rather causal interaction between co-affective or “causally coupled” components. Interaction between components, and the state-transitions each component or procedural step undergoes, are the only viable quantifiable measures of time. Thus, while time is the relevant variable in the above equation, a better (i.e., more methodologically rigorous) variable would be a measure of either (a) the number of causal interactions occurring between co-affective or “adjacent” components within the replacement interval Ta, which is synonymous with the frequency of causal interaction; or (b) the number of state-transitions a given component undergoes within the interval Ta. While the two should be generally correlative, in that state-transitions are facilitated via causal interaction among components, state-transitions may be the better metric because they allow us to quantitatively compare categorically dissimilar types of causal interaction that otherwise couldn’t be summed into a single variable or measure. For example, if one type of molecular interaction has a greater effect on the state-transitions of either component involved (i.e., facilitates a comparatively greater state-transition) than another type does, then counting causal interactions may be less accurate than quantifying the magnitude of state-transitions.
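Under option (a), the same ratio can be re-expressed in causal-interaction counts rather than time; `freq_causal`, the rate of causal interaction between co-affective components, is an assumed measurable quantity:

```python
def interactions_per_replacement(freq_causal, t_a):
    """Convert a replacement interval Ta into the number of causal
    interactions occurring between co-affective components within it,
    the arguably more rigorous variable (option (a) in the text)."""
    return freq_causal * t_a

# e.g., 50 interactions per unit-time over an interval Ta of 2000 time-units
interactions_per_replacement(50.0, 2000.0)  # 100000.0
```

Option (b) would be analogous, with the count of a component's state-transitions (or a sum of their magnitudes) substituted for the interaction count.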

In this way the rate of gradual replacement, despite being on a scale larger than normative metabolic replacement, would hypothetically follow the same degree of graduality with which biological metabolic replacement occurs. This was meant to increase the likelihood of subjective-continuity through a substrate-replacement procedure (both because it is necessarily more gradual than gradual replacement of whole individual neurons at a time, and because it preserves the degree of graduality that exists through the normative metabolic replacement that we already undergo).

Replicating Neuronal Membrane and Integral Membrane Components

Thus far, two main classes of neuron-replication approach have been identified: informational-functionalist and physical-functionalist, the former corresponding to computational and simulation/emulation approaches and the latter to physically embodied, “prosthetic” approaches.

The physicalist-functionalist approach, however, can at this point be further sub-divided into two sub-classes. The first can be called “cyber-physicalist-functionalist”, which involves controlling the artificial ion-channels and receptor-channels via normative computation (i.e., an internal CPU or controller-circuit) operatively connected to sensors and to the electromechanical actuators and components of the ion and receptor channels (i.e., sensing the presence of an electrochemical gradient or difference in electrochemical potential [equivalent to relative ionic concentration] between the respective sides of a neuronal membrane, and activating the actuators of the artificial channels to either open or remain closed, based upon programmed rules). This sub-class is an example of a cyber-physical system, which designates any system with a high level of connection or interaction between its physical and computational components, itself a class of technology that grew out of embedded systems, which designates any system using embedded computational technology and includes many electronic devices and appliances.

This is one further functional step removed from the second approach, which I was then simply calling the “direct” method, but which would be more accurately called the passive-physicalist-functionalist approach. Electronic systems are differentiated from electric systems by being active (i.e., performing computation or, more generally, signal-processing), whereas electric systems are passive and aren’t meant to transform (i.e., process) incoming signals (though any computational system’s individual components must at some level be composed of electric, passive components). Whereas the cyber-physicalist-functionalist sub-class has computational technology controlling its processes, the passive-physicalist-functionalist approach has components emergently constituting a computational device. This consisted of providing the artificial ion-channels with a means of opening in the presence of a given electric potential difference (i.e., voltage), and the receptor-channels with a means of opening in response to the unique attributes of the corresponding neurotransmitter (such as chemical bonding, as in ligand-based receptors, or alternatively in response to its electrical properties, according to the same operational-modality as the artificial ion-channels), without a CPU correlating the presence of an attribute measured by sensors with the corresponding electromechanical behavior of the membrane to be replicated in response. Such passive systems differ from computation in that they require only feedback between components: a system of mechanical, electrical, or electromechanical components is operatively connected so as to produce specific system-states or processes in response to the presence of specific sensed system-states of its environment or itself.
An example in the present case would be constructing an ionic channel from piezoelectric materials, such that the presence of a certain electrochemical potential induces internal mechanical strain in the material; the spacing, dimensions, and quantity of segments would be designed so that the channel opens (or closes) as a single unit when undergoing internal mechanical strain in response to one electrochemical potential, while remaining unresponsive (or insufficiently responsive, i.e., not opening all the way) to another electrochemical potential. Biological neurons work in a similarly passive way: their systems are organized to exhibit specific responses to specific stimuli in basic stimulus-response causal sequences by virtue of their own properties, rather than by external control of individual components via CPU.
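A toy stimulus-response model makes the “no CPU” point concrete; the coefficient and threshold below are arbitrary placeholders, and strain is assumed, for simplicity, to be linear in the applied potential:

```python
# Toy model of the passive piezoelectric channel described above: strain is
# taken as proportional to the applied electrochemical potential (a
# simplifying assumption), and the channel's geometry is tuned so one
# potential produces enough strain to open it fully while another does not.

STRAIN_PER_MV = 0.01    # illustrative piezoelectric coefficient (placeholder)
OPENING_STRAIN = 0.5    # strain at which the segments part fully (placeholder)

def channel_open(potential_mv):
    """Purely passive stimulus-response: no CPU correlates sensor readings
    with actuation; the material's own properties decide."""
    return abs(potential_mv) * STRAIN_PER_MV >= OPENING_STRAIN

channel_open(60.0)   # True: strain 0.6 exceeds the opening threshold
channel_open(20.0)   # False: strain 0.2 is insufficient
```

The whole "program" is the pair of material constants, which is exactly what distinguishes this sub-class from the cyber-physicalist one: changing the behavior means changing the component, not the code.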

However, I found the cyber-physicalist approach preferable, provided it proved sufficient, due to the ability to reprogram computational systems; reprogramming isn’t possible in passive systems without a reorganization of the components themselves, which increases the required technological infrastructure and thereby the cost and design-requirements. This limit on reprogramming also imposes a limit on our ability to modify and modulate the operation of the NRUs, which will be necessary to retain the function of neural plasticity (presumably a prerequisite for experiential subjectivity and memory). The cyber-physicalist approach also seemed preferable due to a larger degree of variability in its operation: it would be easier to operatively connect electromechanical membrane components (e.g., ionic channels, ion pumps) to a CPU, and through the CPU to sensors, programming it to elicit a specific sequence of ionic-channel opening and closing in response to specific sensor-states, than it would be to design artificial ionic channels that respond directly to the presence of an electric potential with sufficient precision and accuracy.

In the cyber-physicalist-functionalist approach the membrane material is constructed so as to be (a) electrically insulative, while (b) remaining thin enough to act as a capacitor for the electric potential difference (i.e., the voltage) between the two sides of the membrane.

The ion-channel replacement units consisted of electromechanical pores that open for a fixed amount of time in the presence of an ion gradient (a difference in electric potential between the two sides of the membrane). This was to be accomplished electromechanically via a means of sensing membrane depolarization (such as through the use of reference electrodes), connected to a microcircuit (or nanocircuit, hereafter referred to as a CPU) programmed to open the electromechanical ion-channels for a length of time corresponding to the rate of normative biological repolarization (i.e., the time it takes to restore the membrane polarization to the resting membrane potential following an action potential), thus allowing the influx of ions at a rate equal to that of the biological ion-channels. Likewise, sections of the post-synaptic membrane were to be replaced by a section of inorganic membrane containing units that sense the presence of the neurotransmitter corresponding to the receptor being replaced. These units were to be connected to a microcircuit programmed to elicit specific changes corresponding to the change in postsynaptic potential that receptor-binding produces in the biological membrane: an increase or decrease in ionic permeability, such as through increasing or decreasing the diameter of ion-channels (e.g., via an increase or decrease in electric stimulation of piezoelectric crystals, as described above) or through a change in the number of open channels. This requires a bit more technological infrastructure than I anticipated the ion-channels requiring.
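The programmed rule for such a channel unit can be sketched as one step of a control loop: open on sensed depolarization, stay open for a fixed interval matched to biological repolarization, then close. The threshold and duration below are placeholders, not measured values:

```python
# Hypothetical control rule for the cyber-physicalist ion-channel unit
# described above. Threshold and duration are illustrative placeholders.

DEPOLARIZATION_THRESHOLD_MV = -55.0   # approximate firing-threshold value
REPOLARIZATION_TIME_MS = 1.0          # placeholder open-duration

def channel_command(membrane_potential_mv, now_ms, opened_at_ms):
    """One control-loop tick. Returns (state, opened_at_ms):
    open on sensed depolarization, close after the fixed interval."""
    if opened_at_ms is not None:
        if now_ms - opened_at_ms < REPOLARIZATION_TIME_MS:
            return "open", opened_at_ms        # fixed interval still running
        return "closed", None                  # interval elapsed: close
    if membrane_potential_mv >= DEPOLARIZATION_THRESHOLD_MV:
        return "open", now_ms                  # depolarization sensed
    return "closed", None
```

Keeping the rule a pure function of (sensor reading, clock, stored state) is what makes this sub-class reprogrammable, per the comparison with the passive approach above.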

While biological receptors detect particular types and relative quantities of neurotransmitters through ligand-gating, we have a variety of potential, mutually exclusive alternatives. For ligand-gated receptors, sensing the presence and steepness of electrochemical gradients may not suffice. However, we don’t necessarily have to use ligand-receptor fitting to replicate their functionality. If there is a difference in charge (i.e., valence) between the neurotransmitter to be detected and other neurotransmitters, and that difference is detectable given the precision of our sensing technologies, then sensing a specific charge may prove sufficient. In the event that charge-sensing proved insufficient, I developed an alternate method. Different chemicals (e.g., neurotransmitters, but also potentially electrolyte solutions) have different weight-to-volume ratios. We equip the artificial-membrane sections with an empty compartment capable of measuring the weight of its contents. Since the volume of the compartment is already known, this allows us to identify specific neurotransmitters (or other relevant molecules and compounds) by their unique weight-to-volume ratios. By operatively connecting the unit’s CPU to this sensor, we can program specific operations (e.g., the receptor opens, allowing entry for a fixed amount of time, or remains closed) in response to the detection of specific neurotransmitters. Though it is unlikely to be necessary, this method could also work for the detection of specific ions, and thus could serve as the operating mechanism underlying the artificial ion-channels as well—though this would probably require higher-precision weight-to-volume comparison than is needed for neurotransmitters.
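A minimal sketch of the weight-to-volume identification scheme, assuming a fixed-volume compartment and a table of known densities. The substance names and density values below are illustrative assumptions, not real measurements:

```python
# Hypothetical sketch of the weight-to-volume identification scheme:
# the compartment's volume is fixed by construction, its contents are
# weighed, and the resulting density is matched to the nearest known
# neurotransmitter. Density values are arbitrary placeholders.

KNOWN_DENSITIES = {        # grams per unit volume (illustrative only)
    "glutamate": 1.46,
    "gaba": 1.11,
    "dopamine": 1.26,
}

COMPARTMENT_VOLUME = 1.0   # fixed and known in advance

def identify(measured_weight, tolerance=0.02):
    """Return the neurotransmitter whose density best matches, or None."""
    density = measured_weight / COMPARTMENT_VOLUME
    best = min(KNOWN_DENSITIES, key=lambda n: abs(KNOWN_DENSITIES[n] - density))
    if abs(KNOWN_DENSITIES[best] - density) <= tolerance:
        return best
    return None  # outside sensing precision: treat as unidentified
```

The `tolerance` parameter stands in for the essay's point about sensing precision: ion discrimination would simply demand a much tighter tolerance than neurotransmitter discrimination.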

Sectional Integration with Biological Neurons

Integrating replacement-membrane sections with adjacent sections of the existing lipid-bilayer membrane becomes far less problematic if the membrane is homogeneous at the scale at which the sections are handled (a scale determined by the size of the replacement sections)—as in the case of biological tissues—rather than molecularly heterogeneous; that is, if we are affixing the edges to a biological tissue rather than to complexes of individual lipid molecules. Reasons for expecting homogeneity at the replacement scale include (a) the ability of experimenters and medical researchers to puncture the neuronal membrane with a micropipette (so as to measure membrane voltage) without rupturing the membrane beyond functionality, and (b) the fact that sodium and potassium ions do not leak through gaps between individual lipid molecules, as they would if the membrane were heterogeneous at this scale. If we find homogeneity at the scale of sectional replacement, we can use more normative means of affixing the edges of the replacement membrane section to the existing lipid-bilayer membrane, such as micromechanical fasteners, adhesive, or fusing via heating or energizing. However, I also developed an approach for the case where the scale of sectional replacement proved molecular and thus heterogeneous: find an intermediate chemical that bonds stably both to the lipid molecules constituting the membrane and to the molecules or compounds constituting the artificial membrane section. Note that if the molecules or compounds on either side must be energized into an abnormal (i.e., unstable) energy state to make them susceptible to bonding, this is acceptable so long as the energies involved don’t reach levels damaging to the biological cell (or can be absorbed before impinging upon or otherwise damaging it).
If such an intermediate molecule or compound cannot be found, we can use a second intermediate chemical that stably bonds with two alternate, secondary intermediates (which themselves bond to the biological membrane and the non-biological membrane section, respectively). The chances of finding a sequence of chemicals that bond stably (i.e., in which each chemical forms stable bonds with the preceding and succeeding chemicals in the sequence) increase in proportion to the number of intermediate chemicals used. Note that it might be possible to apply constant external energization to certain molecules so as to force a bond where no stable bond can form, but this would probably be economically prohibitive and potentially dangerous, depending on the energy levels and energization-precision involved.
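The search for such a sequence of intermediates can be framed as a shortest-path problem over a bond-compatibility graph. A sketch, under the assumption that a pairwise “bonds stably” relation is available as data (the chemical names in the test are placeholders):

```python
# Frame the intermediate-chemical search as breadth-first search:
# nodes are chemicals, edges connect pairs that bond stably, and we
# want the shortest chain from the biological membrane material to
# the artificial-membrane material.
from collections import deque

def bonding_chain(bonds, start, goal):
    """Find a shortest sequence of chemicals from `start` to `goal`
    in which each adjacent pair bonds stably. `bonds` maps a chemical
    to the set of chemicals it bonds with. Returns None if no chain
    exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in bonds.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Breadth-first search returns the chain with the fewest intermediates first, which matches the essay's preference for as few intermediate chemicals as will suffice.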

I also worked on the means of constructing and integrating these components in vivo, using MEMS or NEMS. Most of the developments in this regard are described in the next chapter, but some variations on the construction procedure were necessitated by the sectional-integration procedure, so I will comment on them here. The integration unit positions itself above the membrane section. Using the data acquired by the neuron data-measurement units—which specify the constituents of a given membrane section and assign it a number corresponding to a type of artificial-membrane section in the integration unit’s section-inventory (essentially a store of stacked artificial-membrane sections)—the unit selects the appropriate replacement section. A means of disconnecting a section of lipid-bilayer membrane from the biological neuron is then depressed. This could be a hollow rectangular compartment with edges that sever the membrane via force (e.g., edges terminating in blades), energy (e.g., edges terminating in heating elements), or chemical corrosion (e.g., edges coated with, or secreting, a corrosive substance). The detached section of membrane is then lifted out and compacted, to be drawn into a separate compartment for storing waste organic materials. The artificial-membrane section is subsequently transported down through the same compartment. Since the section is perpendicular to the face of the compartment, moving it down should force the intracellular fluid (which would presumably have leaked into the compartment’s internal area when the membrane section was removed) back into the cell. Once the artificial-membrane section is in place, the preferred integration method is applied.

Sub-neuronal (i.e., sectional) replacement also necessitates that any dynamic patterns of polarization (e.g., an action potential) be maintained during the interval between section removal and artificial-section integration. This was to be achieved by chemical sensors (detecting membrane depolarization) operatively connected to actuators that manipulate the ionic concentration on the other side of the membrane gap, via the release or uptake of ions from biochemical inventories, so as to induce membrane depolarization on the far side of the gap at the right time. Techniques such as partially freezing the cell, so as to slow the rate of membrane depolarization and/or the propagation velocity of action potentials, were also considered.

The next chapter describes my continued work in 2008, focusing on (a) the design requirements for replicating the neural plasticity necessary for memory and subjectivity, (b) the active and conscious modulation and modification of neural operation, (c) wireless synaptic transmission, (d) ways to integrate new neural networks (i.e., mental amplification and augmentation) without disrupting the operation of existing neural networks and regions, and (e) a gradual transition from, or intermediary phase between, the physical (i.e., prosthetic) approach and the informational (i.e., computational, or mind-uploading proper) approach.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.


Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 18, 2013
******************************
This essay is the third chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first two chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death” and “Immortality: Material or Ethereal? Nanotech Does Both!“.
***

The simplest approach to the functional replication of biological neurons that I conceived of during this period involved what is normally called a “black-box” model of a neuron. This was already a concept in the wider brain-emulation community, though I had yet to encounter it. It is even simpler than the mathematically weighted artificial neurons discussed in the previous chapter. Rather than emulating or simulating the behavior of a neuron (i.e., using actual computational—or, more generally, signal—processing), we (1) determine the range of input values that a neuron responds to, (2) stimulate the neuron at each interval within that range (the number of intervals depending on the precision of the stimulus), and (3) record the corresponding range of outputs.

This reduces the neuron to what is essentially a look-up table (or, more formally, an associative array). The input ranges I originally considered (in 2007) consisted of a range of electrical potentials, but were later developed (in 2008) to include different cumulative organizations of specific voltage values (i.e., some inputs activated and others not) and, finally, the chemical inputs and outputs of neurons. The black-box approach was eventually applied at the sub-neuron scale—e.g., to sections of the cellular membrane. This yields a greater degree of functional precision, bringing the functional modality of the black-box NRU class into closer accordance with that of biological neurons—which do in fact process multiple inputs separately, rather than a single cumulative sum, as in the earlier versions of the black-box approach. We would also have a higher degree of variability for a given quantity of inputs.
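The look-up-table reduction can be sketched in a few lines. The `stimulate` callable below stands in for the physical measurement apparatus and is purely hypothetical:

```python
# Minimal sketch of the "black-box" approach: sweep the neuron's input
# range at a fixed stimulus interval, record each response, and replay
# responses from the resulting look-up table.

def build_lookup_table(stimulate, input_min, input_max, step):
    """Map each sampled input value to the recorded output.
    `stimulate` stands in for the measurement apparatus."""
    table = {}
    value = input_min
    while value <= input_max:
        table[round(value, 6)] = stimulate(value)
        value += step
    return table

def black_box_neuron(table, stimulus, step):
    """Respond by looking up the nearest sampled input value."""
    nearest = round(round(stimulus / step) * step, 6)
    return table.get(nearest)
```

The `step` parameter captures the essay's point about precision: a finer stimulus interval produces a larger table and a more faithful replication, at the cost of more recording work up front.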

I soon chanced upon literature dealing with MEMS (micro-electro-mechanical systems) and NEMS (nano-electro-mechanical systems), which eventually led me to nanotechnology, and to its use in nanosurgery in particular. I saw nanotechnology as the preferred technological infrastructure regardless of the approach used. Its physical nature (i.e., its operational and functional modalities) could facilitate the electrical and chemical processes of the neuron if the physicalist-functionalist (i.e., physically embodied or ‘prosthetic’) approach proved preferable or required, while the computation required for its normative functioning assured that it could facilitate the informationalist-functionalist (i.e., computational emulation or simulation) replication of neurons if that approach proved preferable. This was true of MEMS as well, with the sole exception that MEMS cannot directly synthesize neurotransmitters via mechanosynthesis, being limited in this regard to the release of pre-synthesized biochemical inventories. Thus I felt able to work on the conceptual development of the methodological and technological infrastructure underlying both approaches (or at least on variations to the existing operational modalities of MEMS and NEMS so as to make them suitable for their intended use) without having to definitively choose one infrastructure over the other. Moreover, there could be processes that are reducible to computation yet still fail to be included in a computational emulation simply because we have not yet discovered the principles underlying them. The prosthetic approach had the potential of replicating this aspect by integrating such a process, as it exists in the biological environment, into its own physical operation, performing iterative maintenance or replacement of the biological process until we are able to discover the underlying principles of those processes (a prerequisite for discovering how they contribute to the emergent computation occurring in the neuron) and thus include them in the informationalist-functionalist approach.

Also, I had by this time come across the existing approaches to mind-uploading and whole-brain emulation (WBE), including Randal Koene’s minduploading.org, and realized that the notion of immortality through gradually replacing biological neurons with functional equivalents wasn’t strictly my own. I hadn’t yet come across Kurzweil’s thinking on gradual uploading in The Singularity Is Near (where he suggests a similarly nanotechnological approach), and so felt that there was a gap in the extant literature regarding how the emulated neurons or neural networks were to communicate with existing biological neurons (an essential requirement of gradual uploading, and thus of any approach meant to preserve subjective-continuity through substrate replacement). My perceived role thus changed from father of the concept to one of filling in the gaps and inconsistencies in the already-extant approach and developing it past its present state. This is another aspect informing my choice to work on and further diversify both the computational and the physical-prosthetic approach—because this, along with the artificial-biological neural-communication problem, was what I perceived as remaining to be done after discovering WBE.

The anticipated use of MEMS and NEMS in emulating the physical processes of the neuron at first covered only electrical potentials, but eventually came to include the chemical aspects of the neuron as well, in tandem with my increasing understanding of neuroscience. I had by this time come across Drexler’s Engines of Creation, my first introduction to antecedent proposals for immortality—specifically his notion of iterative cellular upkeep and repair performed by nanobots. I applied his concept of mechanosynthesis to the NRUs to facilitate the artificial synthesis of neurotransmitters. I eventually realized that the use of pre-synthesized chemical stores of neurotransmitters was a simpler approach, implementable via MEMS, and thus more inclusive for not requiring nanotechnology as part of the technological infrastructure. I also soon realized that we could eliminate the need for neurotransmitters entirely. First, we record how a specific neurotransmitter affects membrane depolarization at the post-synaptic membrane (i.e., the length and degree of depolarization or hyperpolarization, and possibly the diameter of ion-channels, or the differential opening of some channels and not others) and encode this into the post-synaptic NRU. We then assign a discrete voltage to each possible neurotransmitter (or emergent pattern of neurotransmitters; salient variables include type, quantity, and relative location), such that transmitting that voltage makes the post-synaptic NRU’s controlling circuit implement the membrane-polarization changes (by changing the number of open artificial ion-channels, how long they remain open or closed, or their diameter/porosity) corresponding to the changes in biological post-synaptic membrane depolarization normally caused by that neurotransmitter.
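The neurotransmitter-to-voltage encoding described above might be sketched as a pair of tables, one on each side of the artificial synapse. Every name and number here is an illustrative assumption:

```python
# Hedged sketch of the encoding scheme: each neurotransmitter is
# assigned a discrete signal code, and the post-synaptic NRU maps that
# code to a set of membrane-polarization parameters. All names and
# numeric values are illustrative placeholders.

SIGNAL_CODE = {"glutamate": 1, "gaba": 2}   # discrete voltage per transmitter

# Parameters the controlling circuit would apply on receipt of each code:
RESPONSE = {
    1: {"open_channels": +5, "duration_ms": 1.5, "effect": "depolarize"},
    2: {"open_channels": -3, "duration_ms": 2.0, "effect": "hyperpolarize"},
}

def transmit(neurotransmitter):
    """Pre-synaptic side: encode a detected transmitter as a discrete code."""
    return SIGNAL_CODE[neurotransmitter]

def receive(code):
    """Post-synaptic side: look up the polarization changes to implement."""
    return RESPONSE[code]
```

The point of the sketch is that once the post-synaptic effects are recorded, the chemical itself is redundant: only the code needs to cross the synapse.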

In terms of the enhancement/self-modification side of things, I also realized during this period that mental augmentation (particularly the intensive integration of artificial-neural-networks with the existing brain) increases the efficacy of gradual uploading by decreasing the total portion of your brain occupied by the biological region being replaced—thus effectively making that portion’s temporary operational disconnection from the rest of the brain more negligible to concerns of subjective-continuity.

While I was thinking about the societal implications of self-modification and self-modulation in general, I wasn’t consciously doing active conceptual work on this side of the project (e.g., working on designs for pragmatic technologies and methodologies, as I was with limitless longevity), because I saw the end of death as a much more pressing moral imperative than increasing our degree of self-determination. The 100,000 unprecedented calamities that befall humanity every day cannot wait; for these dying fires it is now or neverness.

Virtual Verification Experiments

The various alternative approaches to gradual substrate-replacement were meant to be alternative designs, each contingent on different premises about what is needed to replicate functionality while retaining subjective-continuity through gradual replacement. I saw the various embodiments as being narrowed down through empirical validation prior to any whole-brain replication experiments. However, I now see that multiple alternative approaches—based, for example, on computational emulation (informationalist-functionalist) and physical replication (physicalist-functionalist), the two main approaches discussed thus far—would have concurrent appeal to different segments of the population. The physicalist-functionalist approach might appeal to the wide number of people who, for one metaphysical prescription or another, don’t believe enough in the computational reducibility of mind to bet their lives on it.

These experiments originally consisted of applying sensors to a given biological neuron, constructing NRUs based on a series of variations of the two main approaches, running each, and looking for any functional divergence over time. This is essentially the same approach outlined in the WBE Roadmap (which I had yet to discover at this point), which suggests a validation strategy involving experiments on single neurons before moving on to the organismal emulation of increasingly complex species, up to and including the human. My thinking about these experiments evolved over the next few years to include some novel approaches that I don’t think have yet been discussed in communities interested in brain emulation.

An equivalent physical or computational simulation of the biological neuron’s environment is required to verify functional equivalence; otherwise we would be unable to distinguish between functional divergence due to an insufficient replication approach or NRU design and functional divergence due to differences in input or operation between the model and the original (caused by insufficiently synchronizing the environmental parameters of the NRU and its corresponding original). Isolating these neurons from their organismal environment allows the necessary fidelity (and thus computational intensity) of the simulation to be minimized by reducing the number of environmental variables affecting the biological neuron during the initial experiments. Even if this doesn’t give a perfectly reliable model of the efficacy of functional replication, given the number of environmental variables a neuron in a full brain experiences, it is a fair approximation. Some NRU designs might fail even in a relatively simple neuronal environment, so testing every NRU design against a set of environmental variables comparable to the biological brain might be unnecessary (and economically prohibitive) given its cost-benefit ratio. And since we need to isolate the neuron to perform any early, non-whole-organism experiments (i.e., on individual neurons) at all, precise control over the number and nature of environmental variables would be relatively easy to achieve: such control is already an important part of normative biological experimentation, because a lack of control over environmental variables makes for an inconsistent methodology and thus for unreliable data.
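The single-neuron validation loop—drive the biological neuron and a candidate NRU with identical inputs and track functional divergence over time—can be sketched as follows, with the divergence metric and acceptance threshold chosen purely for illustration:

```python
# Sketch of the validation experiment: feed the same input sequence to
# the biological neuron (via its sensors) and to the candidate NRU
# design, then compare the two output traces. The mean-absolute-
# difference metric and the threshold are illustrative assumptions.

def functional_divergence(bio_outputs, nru_outputs):
    """Mean absolute difference between paired output traces."""
    pairs = list(zip(bio_outputs, nru_outputs))
    return sum(abs(b - n) for b, n in pairs) / len(pairs)

def validate_nru(bio_neuron, nru_model, inputs, threshold=0.05):
    """Accept the NRU design only if it tracks the biological neuron
    across the whole input sequence."""
    bio = [bio_neuron(x) for x in inputs]
    nru = [nru_model(x) for x in inputs]
    return functional_divergence(bio, nru) <= threshold
```

In practice `bio_neuron` would be a recording from an isolated cell and `inputs` a synchronized stimulus sequence; controlling the environmental variables, as argued above, is what makes the paired comparison meaningful.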

And as we scale up to the whole-network and eventually the organismal level, a similar reduction in the computational requirements of the NRU’s environmental simulation is possible by replacing the inputs or sensory mechanisms (from single photocells to whole organs) with VR-modulated input. The required complexity, and thus computational intensity, of a sensorially mediated environment can be vastly reduced if the normative sensory environment of the organism is supplanted with a much-simplified VR simulation.

Note that the efficacy of this approach compared with the first (reducing actual environmental variables) is hypothetically greater, because going from a simplified VR version to the original sensory environment is a difference not of category but of degree. Thus a potentially fruitful variation on the first experiment (physical reduction of a biological neuron’s environmental variables) would be not the complete elimination of environmental variables, but rather a decrease in the range or degree of deviation within each variable—keeping all the categories and just reducing their degree.

Anecdotally, one novel modification conceived during this period involves distributing sensors (operatively connected to the sensory areas of the CNS) in the brain itself, so that we can viscerally sense ourselves thinking—the notion of metasensation: a sensorial infinite regress caused by having sensors in the sensory modules of the CNS, essentially allowing one to sense oneself sensing oneself sensing.

Another is a seeming refigurement of David Pearce’s Hedonistic Imperative—namely, the use of active NRU modulation to negate the effects of cell (or, more generally, stimulus-response) desensitization—the fact that the more times we experience something, or indeed even think something, the more it decreases in intensity. I felt that this was what made some of us lose interest in our lovers and become bored by things we once enjoyed. If we were able to stop cell desensitization, we wouldn’t have to needlessly lose experiential amplitude for the things we love.

In the next chapter I will describe the work I did in the first months of 2008, during which I worked almost wholly on conceptual varieties of the physically embodied prosthetic (i.e., physical-functionalist) approach (particularly in gradually replacing subsections of individual neurons to increase how gradual the cumulative procedure is) for several reasons:

The original utility of ‘hedging our bets’ as discussed earlier—developing multiple approaches increases evolutionary diversity; thus, if one approach fails, we have other approaches to try.

I felt the computational side was already largely developed in the work done by others in Whole-Brain Emulation, and thus that I would be benefiting the larger objective of indefinite longevity more by focusing on those areas that were then comparatively less developed.

The perceived benefit of a new approach to subjective-continuity through a substrate-replacement procedure aiming to increase the likelihood of gradual uploading’s success by increasing the procedure’s cumulative degree of graduality. The approach was called Iterative Gradual Replacement and consisted of undergoing several gradual-replacement procedures, wherein the class of NRU used becomes progressively less similar to the operational modality of the original, biological neurons with each iteration; the greater the number of iterations used, the less discontinuous each replacement phase is in relation to its preceding and succeeding phases. The most basic embodiment of this approach would involve gradual replacement with physical-functionalist (prosthetic) NRUs, which in turn are gradually replaced with informational-physicalist (computational/emulatory) NRUs. My qualms with this approach today stem from the observation that the operational modalities of the physically embodied NRUs seem as discontinuous in relation to the operational modalities of the computational NRUs as the operational modalities of the biological neurons do. The problem seems to result from the lack of an intermediary stage between physical embodiment and computational (or second-order) embodiment.



Immortality: Material or Ethereal? Nanotech Does Both! – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 11, 2013
******************************

This essay is the second chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first chapter was previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death“.

In August 2006 I conceived of the initial cybernetic brain-transplant procedure. It originated from a very simple, even intuitive sentiment: if there were heart and lung machines and prosthetic organs, then why couldn’t these be integrated in combination with modern (and future) robotics to keep the brain alive past the death of its biological body? I saw a possibility, felt its magnitude, and threw myself into realizing it. I couldn’t think of a nobler quest than the final eradication of involuntary death, and felt willing to spend the rest of my life trying to make it happen.

First I collected research on organic brain transplantation, on maintaining the brain’s homeostatic and regulatory mechanisms outside the body (or in this case without the body), on a host of prosthetic and robotic technologies (including sensory prosthesis and substitution), and on the work in Brain-Computer-Interface technologies that would eventually allow a given brain to control its new, non-biological body—essentially collecting the disparate mechanisms and technologies that would collectively converge to facilitate the creation of a fully cybernetic body to house the organic brain and keep it alive past the death of its homeostatic and regulatory organs.

I had by this point come across online literature on artificial neurons (ANs) and artificial neural networks (ANNs), which are basically simplified mathematical models of neurons meant to process information in a way coarsely comparable to them. There was no mention in the literature of integrating them with existing neurons, or of replacing existing neurons toward the objective of immortality; their use was merely as an interesting approach to computation, particularly suited to certain situations. While artificial neurons can be run on general-purpose hardware (massively parallel architectures being the most efficient for ANNs), I had something more akin to neuromorphic hardware in mind (though I wasn’t aware of that term just yet).

At its most fundamental level, an artificial neuron need not be physical at all. Its basic definition is a mathematical model roughly based on neuronal operation—and nothing precludes that model from existing solely on paper, with no actual computation going on. When I discovered them, I had thought that a given artificial neuron was a physically embodied entity rather than a software simulation—i.e., an electronic device that operates in a way comparable to biological neurons. Upon learning that they were mathematical models, however, and that each AN needn’t be a separate entity from the rest of the ANs in a given network, I saw no problem in designing them to be separate physical entities (which they needed to be in order to fit the purpose I had for them—namely, the gradual replacement of biological neurons with prosthetic functional equivalents). Each AN would be a software entity run on a piece of computational substrate, enclosed in a protective casing allowing it to co-exist with the biological neurons already in place. The mathematical or informational outputs of the simulated neuron would be translated into biophysical, chemical, and electrical outputs by operatively connecting the simulation to an appropriate series of actuators (ranging from the simple production of electric fields or currents to the release of chemical stores of neurotransmitters), and likewise a series of sensors would translate biophysical, chemical, and electrical properties into the mathematical or informational form they would need to be in to be accepted as input by the simulated AN.
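The sensor/actuator wrapping described above amounts to a simple architecture that can be sketched as follows; the translation callables stand in for hypothetical transduction hardware, and all names are illustrative:

```python
# Illustrative sketch of the architecture described above: a software
# neuron model wrapped by sensors (physical -> informational) and
# actuators (informational -> physical).

class EmbodiedArtificialNeuron:
    def __init__(self, model, sensors, actuators):
        self.model = model          # the mathematical neuron model
        self.sensors = sensors      # callables reading the biophysical milieu
        self.actuators = actuators  # callables producing physical output

    def tick(self):
        # 1. Translate physical state into the model's informational input.
        inputs = [read() for read in self.sensors]
        # 2. Run the simulated neuron on those inputs.
        output = self.model(inputs)
        # 3. Translate the informational output into physical action.
        for act in self.actuators:
            act(output)
        return output
```

The design point is the separation of concerns: the model stays a pure function, while all coupling to the biological environment lives in the sensor and actuator lists.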

Thus at this point I didn’t make a fundamental distinction between replicating the functions and operations of a neuron via physical embodiment (e.g., via physically embodied electrical, chemical, and/or electromechanical systems) or via virtual embodiment (usefully considered as 2nd-order embodiment, e.g., via a mathematical or computational model, whether simulation or emulation, run on a 1st-order physically embodied computational substrate).

My analysis of the potential advantages, disadvantages, and categorical differences between these two approaches was still a few months away. When I discovered ANs—still thinking of them as physically embodied electronic devices rather than as mathematical or computational models—I hadn’t yet moved on to ways of preserving the organic brain itself so as to delay its organic death. Their utility as a more permanent, durable, and readily repairable supplement to our biological neurons wasn’t yet apparent.

I initially saw their utility as intelligence amplification, extension, and modification through integration with the existing biological brain. I realized that they were categorically different from brain-computer interfaces (BCIs) and normative neural prostheses in being able to become an integral and continuous part of our minds and personalities—or, more properly, of the subjective, experiential parts of our minds. If they communicated with single neurons and interacted with them on their own terms—if the two were operationally indistinct—then they could become a continuous part of us in a way that didn’t seem possible for normative BCI, given its fundamental operational dissimilarity with existing biological neural networks. I also collected research on the artificial synthesis and regeneration of biological neurons as an alternative to ANs. This approach would replace an aging or dying neuron with an artificially synthesized but still structurally and operationally biological neuron, so as to maintain the aging or dying neuron’s existing connections and relative location. I saw this procedure (i.e., adding artificial, or artificially synthesized but still biological, neurons to the existing neurons constituting our brains—not yet for the purpose of gradually replacing the brain, but for the purpose of mental expansion and amplification) as not only allowing us to extend our existing functional and experiential modalities (e.g., making us smarter through an increase in synaptic density and connectivity, and in the number of neurons generally), but even to create fundamentally new functional and experiential modalities, categorically unimaginable to us now, via the integration of wholly new artificial neural networks embodying such modalities.
Note that I saw this as newly possible with my cybernetic-body approach, because additional space could be made for the additional neurons and neural networks, whereas the degree to which we could integrate new, artificial neural networks into a normal biological body would be limited by the available volume of the unmodified skull.

Before I discovered ANs, I speculated in my notes as to whether the “bionic nerves” alluded to in some of the literature I had collected by this point (specifically regarding BCI, neural prostheses, and the ability to operatively connect a robotic prosthetic extremity – e.g., an arm or a leg – via BCI) could be used to extend the total number of neurons and synaptic connections in the biological brain. This sprang from my knowledge of the operational similarities between neurons and muscle cells, both members of the larger class of excitable cells.

Kurzweil’s cyborgification approach (i.e., that we could integrate non-biological systems with our biological brains to such an extent that the biological portions become so small as to be negligible to our subjective-continuity when they succumb to cell-death, thus achieving effective immortality without needing to actually replace any of our existing biological neurons at all) may have been implicit in this concept. I envisioned our brains increasing in size many times over, such that the majority of our mind would be embodied or instantiated in larger part by the artificial portions than by the biological ones. The fact that the degree to which the loss of a part of our brain affects our emergent personality depends on how large that lost part is in comparison to the brain as a whole (other potential metrics besides size include connectivity, and the degree to which other systems depend on that portion for their own normative operation) – the loss of a lobe being much worse than the loss of a neuron – follows naturally from this initial premise. The lack of any explicit statement of this realization in my notes during this period, however, makes this mere speculation.

It wasn’t until November 11, 2006, that I had the fundamental insight underlying mind-uploading – that replacing existing biological neurons with non-biological functional equivalents, while maintaining each biological neuron’s relative location and connections, could very well preserve the memory and personality embodied therein or instantiated thereby – essentially achieving potential technological immortality, since the approach is based on replacement, and iterations of the replacement cycle can be run indefinitely. Moreover, the fact that we would be manufacturing such functional equivalents ourselves means not only that we could diagnose potential dysfunctions more easily and quickly, but that we could manufacture them with readily replaceable parts, thus simplifying the process of physically remediating any dysfunction or operational degradation – even going so far as to include systems for the safe import and export of replacement components, or to make all such components readily detachable, so that we don’t have to damage adjacent structures and systems in the process of removing a given component.

Perhaps it wasn’t so large a conceptual step from knowledge of the existence of computational models of neurons to the realization of using them to replace existing biological neurons towards the aim of immortality. Perhaps I take too much credit for independently conceiving both the underlying conceptual gestalt of mind-uploading, as well as some specific technologies and methodologies for its pragmatic technological implementation. Nonetheless, it was a realization I arrived at on my own, and was one that I felt would allow us to escape the biological death of the brain itself.

While I was aware (after a little more research) that ANNs were mathematical (and thus computational) models of neurons, hereafter referred to as the informationalist-functionalist approach, I felt that a physically embodied (i.e., not computationally emulated or simulated) prosthetic approach, hereafter referred to as the physicalist-functionalist approach, would be a better approach to take. This was because even if the brain were completely reducible to computation, a prosthetic approach would necessarily facilitate the computation underlying the functioning of the neuron (as the physical operations of biological neurons do presently), and if the brain proved to be computationally irreducible, then the prosthetic approach would in such a case presumably preserve whatever salient physical processes were necessary. So the prosthetic approach didn’t necessitate the computational-reducibility premise – but neither did it preclude such a view, thereby allowing me to hedge my bets and increase the cumulative likelihood of maintaining subjective-continuity of consciousness through substrate-replacement in general.

This marks a telling proclivity recurrent throughout my project: the development of mutually exclusive and methodologically and/or technologically alternate systems for a given objective, each based upon alternate premises and contingencies – a sort of possibilizational web unfurling fore and outward. After all, if one approach failed, then we had alternate approaches to try. This seemed like the work-ethic and conceptualizational methodology that would best ensure the eventual success of the project.

I also had less assurance in the sufficiency of the informationalist-functionalist approach at the time, stemming mainly from a misconception about the premises of normative Whole-Brain Emulation (WBE). When I first discovered ANs, I was more dubious about the computational reducibility of the mind, because I thought that it relied on the premise that neurons act in a computational fashion (i.e., like normative computational paradigms) to begin with – thus a conflation of classical computation with neural operation – rather than on the conclusion, drawn from the Church-Turing thesis, that mind is computable because the universe is. It is not that the brain is a computer to begin with, but that we can model any physical process via mathematical/computational emulation and simulation. The latter is the correct view, and I didn’t really realize this until after I had discovered the WBE roadmap in 2010. This fundamental misconception allowed me, however, to independently arrive at the insight underlying the real premise of WBE: that combining premise A – that we have various mathematical, computational models of neuron behavior – with premise B – that we can run mathematical models on computers – ultimately yields the conclusion C – that we can simply run the relevant mathematical models on a computational substrate, thereby effectively instantiating the mind “embodied” in those neural operations, while simultaneously eliminating many logistical and technological challenges of the prosthetic approach. This seemed likelier than the original assumption – the conflation of neuronal activity with normative computation, as though neurons were a special case unlike, say, muscle cells or skin cells, which isn’t the presumption WBE makes at all – because it required only the ability to mathematically model physical processes in general, rather than a fundamental equivalence between two different types of physical system (neuron and classical computer).
The fact that I mistakenly saw it as an approach to emulation that was categorically dissimilar to normative WBE also helped spur me to continue conceptual development of the various sub-aims of the project after finding that the idea of brain emulation already existed, because I thought that my approach was sufficiently different to warrant my continued effort.

There are other reasons for suspecting that mind may not be computationally reducible using current computational paradigms – reasons that rely neither on vitalism (i.e., the claim that mind is at least partially immaterial and irreducible to physical processes) nor on the invalidity of the Church-Turing thesis. This line of reasoning has nothing to do with functionality and everything to do with possible physical bases for subjective-continuity, both a) immediate subjective-continuity (i.e., how can we be a unified, continuous subjectivity if all our component parts are discrete and separate in space?), which can be considered the capacity to have subjective experience, also called sentience (as opposed to sapience, which designates the higher cognitive capacities, like abstract thinking), and b) temporal subjective-continuity (i.e., how do we survive as continuous subjectivities through a process of gradual substrate replacement?). Thus this argument impacts the possibility of computationally reproducing mind only insofar as the definition of mind is not strictly functional but is made to include a subjective sense of self – or immediate subjective-continuity. Note that subjective-continuity through gradual replacement is not speculative, but has proof of concept in the normal metabolic replacement of the neuron’s constituent molecules. Each of us is materially a different person than we were seven years ago, and we still claim to retain subjective-continuity. Thus, gradual replacement works; it is only the scale and rate required that are in question.

This is another way in which my approach and project differ from WBE. WBE equates functional equivalence (i.e., the same output via different processes) with subjective equivalence, whereas my approach involved developing variant approaches to neuron-replication-unit design, each based on a different hypothetical basis for instantive subjective-continuity.

Are Current Computational Paradigms Sufficient?

Biological neurons are both analog and binary. It is useful to consider a 1st tier of analog processes, manifest in the graded potentials occurring all over the neuronal soma and terminals, with a 2nd tier of binary processing: either the sum of those potentials crosses the threshold value needed for the neuron to fire an action potential, or it falls short and the neuron fails to fire. Thus the analog processes form the basis of the digital ones. Moreover, the neuron is in an analog state even in the absence of membrane depolarization, through the generation of the resting-membrane potential (maintained via active ion-transport proteins), which is analog rather than binary in that it always undergoes minor fluctuations, because it is instantiated by an active process (ion pumps). Thus the neuron at any given time is always in the process of a state-transition (including minor state-transitions within the variation-range allowed by a given higher-level static state; e.g., the resting-membrane potential is a single state, yet still undergoes minor fluctuations because the ions and components manifesting it undergo state-transitions of their own), and thus is never definitively on or off. This brings us to the first potential physical basis for both immediate and temporal subjective-continuity. Analog states are continuous, and the fact that there is never a definitive break in the processes occurring at the lower levels of the neuron represents a potential basis for our subjective sense of immediate and temporal continuity.
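The two-tier picture above can be illustrated with a toy integrate-and-fire model – a standard simplification, not something proposed in the original essay. All parameter values below are illustrative assumptions, not physiological measurements.

```python
# Toy two-tier model: tier 1 integrates analog (graded) inputs; tier 2 is a
# binary decision -- the summed potential either crosses threshold and the
# neuron fires, or it does not. Values are illustrative, not physiological.

V_REST = -70.0      # resting-membrane potential (mV), assumed value
THRESHOLD = -55.0   # firing threshold (mV), assumed value
LEAK = 0.1          # fraction of deviation from rest that decays per step

def simulate(inputs, v=V_REST):
    """Integrate a sequence of graded inputs; return (voltages, spikes)."""
    voltages, spikes = [], []
    for i in inputs:
        v += i                       # tier 1: analog summation
        v -= LEAK * (v - V_REST)     # passive decay back toward rest
        fired = v >= THRESHOLD       # tier 2: binary threshold decision
        if fired:
            v = V_REST               # reset after firing
        voltages.append(v)
        spikes.append(fired)
    return voltages, spikes

# Weak inputs settle below threshold and never fire; stronger inputs
# accumulate until the threshold is crossed.
_, weak = simulate([0.5] * 20)
_, strong = simulate([2.0] * 20)
```

With these numbers, repeated 0.5 mV inputs settle at an equilibrium near −65 mV and never fire, while repeated 2.0 mV inputs accumulate past −55 mV and trigger a spike – the analog tier grounding the binary one.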

Paradigms of digital computation, on the other hand, are at the lowest scale either definitively on or definitively off. While any voltage within a certain range will cause the generation of an output, it is still at base binary because in the absence of input the logic elements are not producing any sort of fluctuating voltage—they are definitively off. In binary computation, the substrates undergo a break (i.e., region of discontinuity) in their processing in the absence of inputs, and are in this way fundamentally dissimilar to the low-level operational modality of biological neurons by virtue of being procedurally discrete rather than procedurally continuous.

If the premise holds that the analog and procedurally continuous nature of neuron functioning (including action potentials, the resting-membrane potential, and metabolic processes) forms a basis for immediate and temporal subjective-continuity, then current digital paradigms of computation may prove insufficient for maintaining subjective-continuity if used as the substrate in a gradual-replacement procedure, while still being sufficient to functionally replicate the mind in all empirically verifiable metrics and measures. This is due both to the operational modality of binary processing (i.e., the lack of analog output) and to its procedural modality (the lack of temporal continuity – of minor fluctuations around a baseline state – when in a resting or inoperative state). A logic element could, however, have a fluctuating resting voltage rather than the absence of any voltage, and could thus be procedurally continuous while still being operationally discrete, producing solely binary outputs.
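The hypothetical logic element just described – operationally discrete yet procedurally continuous – can be sketched in a few lines. This toy model is purely illustrative and not from the original essay; the baseline, noise amplitude, and threshold values are arbitrary assumptions.

```python
# Illustrative sketch (not a real hardware design): a logic element that is
# operationally discrete -- its emitted output is always 0 or 1 -- yet
# procedurally continuous, because even with no input it keeps producing a
# fluctuating baseline voltage rather than going definitively "off".
import random

BASELINE = 0.2   # resting voltage (arbitrary units), assumed value
NOISE = 0.05     # amplitude of resting fluctuations, assumed value
THRESHOLD = 0.5  # binary decision threshold, assumed value

def element_voltage(input_voltage=0.0):
    """Internal voltage: baseline plus a minor fluctuation plus any input."""
    return BASELINE + random.uniform(-NOISE, NOISE) + input_voltage

def element_output(input_voltage=0.0):
    """Binary output: 1 iff the internal voltage crosses the threshold."""
    return 1 if element_voltage(input_voltage) >= THRESHOLD else 0

# With no input the element still carries a varying signal (procedural
# continuity), while its emitted output remains strictly binary.
resting = [element_voltage() for _ in range(100)]
assert len(set(resting)) > 1                           # never a flat "off" state
assert all(element_output() == 0 for _ in range(100))  # resting outputs stay 0
assert element_output(0.6) == 1                        # input drives a binary 1
```

The design choice mirrors the paragraph above: continuity lives at the procedural level (the voltage never stops fluctuating), while discreteness lives at the operational level (the emitted output is always 0 or 1).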

So there are two possibilities here. One is that any physical substrate used to replicate a neuron (whether via 1st-order embodiment, a.k.a. prosthesis/physical systems, or via 2nd-order embodiment, a.k.a. computational emulation or simulation) must not undergo a break in its operation in the absence of input – because biological neurons do not, and this may be a potential basis for instantive subjective-continuity – but must instead produce a continuous or uninterrupted signal when in a “steady state” (i.e., in the absence of inputs). The second possibility includes all the premises of the first, but adds that such an inoperative-state signal (or “no-inputs”-state signal) must undergo minor fluctuations, because only then is a steady stream of causal interaction occurring – producing a perfectly steady signal could be as discontinuous as no signal at all, like being “on pause”.

Thus one reason for developing the physicalist-functionalist (i.e., physically embodied prosthetic) approach to NRU (neuron-replication-unit) design was to hedge my bets, in case a) current computational substrates fail to replicate a personally continuous mind for the reasons described above; b) we fail to discover the principles underlying a given physical process – thus being unable to predictively model it – but still succeed in integrating it with the artificial systems comprising the prosthetic approach, until such a time as we are able to discover its underlying principles; or c) we find some other, heretofore unanticipated conceptual obstacle to the computational reducibility of mind.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Copeland, B. J. (2008). The Church-Turing Thesis. In The Stanford Encyclopedia of Philosophy (Fall 2008 Edition). Retrieved February 28, 2013, from http://plato.stanford.edu/archives/fall2008/entries/church-turing

Crick, F. (1984). Memory and molecular turnover. Nature, 312(5990), 101. PMID: 6504122

Criterion of Falsifiability, Encyclopædia Britannica. Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/201091/criterion-of-falsifiability

Drexler, K. E. (1986). Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Books.

Grabianowski (2007). How Brain-computer Interfaces Work. Retrieved February 28, 2013, from http://computer.howstuffworks.com/brain-computer-interface.htm

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013, from http://www.minduploading.org/

Martins, N. R., Erlhagen, W. & Freitas Jr., R. A. (2012). Non-destructive whole-brain monitoring using nanorobots: Neural electrical data rate requirements. International Journal of Machine Consciousness, 2011. Retrieved February 28, 2013, from http://www.nanomedicine.com/Papers/NanoroboticBrainMonitoring2012.pdf.

Narayan, A. (2004). Computational Methods for NEMS. Retrieved February 28, 2013, from http://nanohub.org/resources/407.

Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap (Technical Report #2008-3). Future of Humanity Institute, Oxford University.

Star, E. N., Kwiatkowski, D. J. & Murthy, V. N. (2002). Rapid turnover of actin in dendritic spines and its regulation by activity. Nature Neuroscience, 5 , 239-246.

Tsien, J. Z., Rampon, C., Tang,Y.P. & Shimizu, E. (2000). NMDA receptor dependent synaptic reinforcement as a crucial process for memory consolidation. Science, 290 , 1170-1174.

Vladimir, Z. (2013). Neural Network. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/410549/neural-network

Wolf, W. (March 2009). Cyber-physical Systems. In Embedded Computing. Retrieved February 28, 2013, from http://www.jiafuwan.net/download/cyber_physical_systems.pdf

 

The Moral Imperative and Technical Feasibility of Defeating Death – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 5, 2013
******************************

Consume my heart away; sick with desire
And fastened to a dying animal
It knows not what it is; and gather me
Into the artifice of eternity.
Once out of nature I shall never take
My bodily form from any natural thing,
But such a form as Grecian goldsmiths make
Of hammered gold and gold enameling
To keep a drowsy Emperor awake;
Or set upon a golden bough to sing
To lords and ladies of Byzantium
Of what is past, or passing, or to come.

~ W. B. Yeats

The original is unfaithful to the translation.

~ Jorge Luis Borges

Whatever can be repaired gradually without destroying the original whole is, like the vestal fire, potentially eternal.

~ Francis Bacon, A History of Life and Death, 1638

I became both Immortalist and Transhumanist long before I knew such designations existed. In 2006, at age 14, I conceived of both the extreme desirability and technical feasibility of ending death, without any knowledge of the proposals for immortality already extant. I thought I was the only one in the world who saw both the utter, belligerent waste of death, and our ability to technologically defeat it. I was dumbfounded that humanity wasn’t attacking the problem like any other preventable source of widespread suffering. I saw that the end of death was not only desirable but a moral imperative.
***

If we have the power to make it happen, or even a chance at doing so, yet fail to try for reasons of inertia, incredulity, or indifference, then we are condemning massive numbers of real people to unnecessary death by our inaction. I felt a moral obligation to work on conceptual development of the various pragmatic aspects required to physically realize indefinite longevity until I was old enough to put these developments into practice – i.e., to do experiments and design physical systems. I worked on my grand project, as I thought of it, from August 2006 until May 2010, at which time I discovered multiple other approaches to indefinite longevity being actively developed (initially through Kurzweil’s The Singularity is Near), and even multiple antecedents of my own approach. Thereafter I felt less of an imperative to continue active conceptual development of these procedures. I was happy to find the existing Immortalist movement, of course; I stopped not out of resentment at having been anteceded, but out of newfound assurance that the defeat of death didn’t lie solely in my hands.

I had worked for four years on conceptual designs and approaches to indefinite life extension – designs that I was planning to build and experimentally verify in my young adulthood, whether through normative medical research and academia or through a privately funded venture, thinking that I would have more success that way than if I came to the world as a teenager with these ideas as they stood. By 2010, four years into the project, I discovered that others were seeking the defeat of death through technological intervention as well, and that many of the specific ideas I had come up with were already out in the world.

My original approach involved transplanting the organic brain into a full cybernetic body. Over the next few months I collected research on experiments in organic brain transplantation done with salamanders, dogs, and monkeys, on maintaining the brain’s homeostatic and regulatory mechanisms outside the body, and on a host of prosthetic and robotic technologies which I saw as developmentally converging to allow the creation of a fully cybernetic body. I soon realized that this approach was problematic: while the brain typically dies as a consequence of the failure of its homeostatic and regulatory mechanisms (i.e., the heart and lungs failing), it would still fall prey to cell death if it remained organic, even if such regulatory mechanisms were maintained technologically.

This obstacle led to my conceiving the essential gestalt of uploading – the gradual replacement of neurons with functional equivalents that preserve each original neuron’s relative location and connection – three months later. Although my original approach was prosthetic (i.e., physically embodied functional equivalents of neurons), I eventually saw computational models as being preferable for their comparatively higher speed and ease of modification and/or modulation.

I discovered that Brain-Emulation and Connectomics (or, more informally, Mind-Uploading) was an existing discipline not long after conceiving of the idea, but at the time thought that various aspects required for gradual replacement (and thus for real immortality, rather than the creation of an immortal double) were undeveloped – in particular, how the computational models would communicate and maintain functional equilibrium with the existing biological neurons. If we seek to replace biological neurons with artificial equivalents, once we have a simulation of a given neuron in a computer outside the body, how is that simulated neuron to communicate with the biological neurons still inside that biological body, and vice versa? My solution was the use of initially MEMS (micro-electro-mechanical systems) and later NEMS (nano-electro-mechanical systems) to detect biophysical properties via sensors and translate them into computational inputs, and likewise to translate computational outputs into biophysical properties via electrical actuators and the programmed release of chemical stores (essentially stored quantities of indexed chemicals to be released upon command). While the computational hardware could hypothetically be located outside the body, communicating wirelessly with corresponding in-vivo sensors and actuators, I saw the replacement of neurons with enclosed in-vivo computational hardware, in direct operative connection with its corresponding sensors and actuators, as preferable. I didn’t realize until 2010 that this approach – the use of NEMS to computationally model the neurons, to integrate (i.e., construct and place) the artificial neurons, and to translate biophysical signals into computational signals and vice versa – had already been suggested by Kurzweil and conceptually developed more formally by Robert Freitas; when I did, I felt that I didn’t really have much to present that hadn’t already been conceived and developed.
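The sensor/actuator translation layer described above can be sketched as a toy control loop. Everything here – the class names, the placeholder neuron model, and the idealized environment dictionary – is a hypothetical illustration of the scheme, not an implementation of any real MEMS/NEMS system.

```python
# Toy sketch of the translation layer: a sensor reads a biophysical quantity,
# a computational model of the replaced neuron is stepped forward, and an
# actuator writes the model's output back as a biophysical signal. All names
# and numbers here are hypothetical illustrations, not real device APIs.

class Sensor:
    """Stands in for a MEMS/NEMS transducer reading membrane voltage (mV)."""
    def read(self, environment):
        return environment["membrane_mv"]

class Actuator:
    """Stands in for an electrical actuator / chemical-release mechanism."""
    def write(self, environment, mv):
        environment["membrane_mv"] = mv

class NeuronModel:
    """Placeholder computational model: simple decay toward a resting value."""
    REST, GAIN = -70.0, 0.5
    def step(self, measured_mv):
        return self.REST + self.GAIN * (measured_mv - self.REST)

def translation_cycle(env, sensor, model, actuator):
    measured = sensor.read(env)      # biophysical -> computational input
    computed = model.step(measured)  # model step (could equally run on
                                     # external hardware, wirelessly)
    actuator.write(env, computed)    # computational -> biophysical output
    return computed

env = {"membrane_mv": -50.0}
translation_cycle(env, Sensor(), NeuronModel(), Actuator())
```

Enclosing sensor, model, and actuator in one in-vivo unit corresponds to the “enclosed computational hardware” variant preferred in the text; moving the model step onto external hardware corresponds to the wireless variant.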

However, since then I’ve come to realize some significant distinctions between my approach and Brain-Emulation, and that – besides being an interesting story that helps validate the naturalness of Immortalism’s premises (that indefinite longevity is a physically, and thus technologically, realizable state – and, what can be considered the “strong Immortalist” claim, that providing people the choice of indefinite longevity, if it were realizable, is a moral imperative) – my work contained several novel notions and conceptions which might prove useful to the larger community working and thinking on these topics.

While this project began as a means of indefinite longevity, it took on Transhumanist concerns within days of its conception. A cybernetic body not only frees one from the strictures of death, but also from the limitations of a static body designed for a static environment. Freed from our flesh, we could comfortably bear any extremes of Earth or beyond; interchange our bodily designs with the nonchalance of attire; and continuously, on a daily basis, take charge of what it means for us to be. I envisioned extreme phenotypic diversity as undermining racism and prejudices, an explosion of intelligence and happiness consequent of finally taking the stuff of our being into our own hands, the newfound availability of heretofore unrealized modalities of being, experience, thought, morality, and abilities realized through the technological extension and enhancement of the mind.

By 2007, I was calling this philosophy “Enhancism”, which I designated as the thesis that enhancement is the principle underlying both human nature and evolutionary nature. Regardless of what constitutes an “enhancement”, the fact that we strive to reach idealized objectives and grow toward what we envision as better versions of ourselves and our world exemplifies enhancement as the underlying driver and primal force that makes up Mind, Man, and Humanity. The objective or “optimization target” isn’t important – what is important is the act of designating an objective as better, and then striving in a fit of fiery thrusts toward it.

I never saw this imperative of improving ourselves using all available means as a move away from humanity, but rather as a natural extension and continuation of what has always best designated us as human. I realized that self-directed modification of both body and mind were not only both possible and desirable, but a natural extension of what humanity has been doing since long before the very concept of “humanity” existed. I had arrived at the essential premises and conclusions of both Immortalism and Transhumanism without exposure to existing forms of either. Indeed, this was even before I started reading science fiction!

I think this observation undermines what I feel to be a common misconception outside of Transhumanist circles – that Transhumanism and Immortalism are fringe movements for statistical outliers with idiosyncratic interests. I think that it rather adds credence to the rebuttal that Transhumanism and Immortalism exemplify the modern embodiment of all we’ve ever been; that they are not founded upon grandiose and overly contingent axioms, but rather on the respective premises that life is good and so should be extended for as long as possible, and that we are more likely to create a better world and better selves than we are to find them already given.

If the underlying logic behind Immortalism and Transhumanism can be independently arrived at by a 14-year-old without any knowledge of historical or extant forms of either, then how removed from the human concerns of the majority can they really be? If they relied on a host of contingent hopes and deviant memetic baggage – if their claims or conclusions were overly complicated in any way – how could they be arrived at so readily and fluidly by an adolescent?

I also unwittingly recapitulated many specific Transhumanist objectives throughout the course of my “grand project”, as I had thought of it at the time. My approach of gradually replacing the neurons in the brain with functional equivalents would necessitate control over the processes exhibited by the replacements. This would allow us to actively and consciously control the variables and metrics determining neuronal behavior, not only modifying ourselves through the integration of additional NRUs (neuron-replication-units) or NRU-networks, but also through active modification and real-time modulation of the NRUs that would by then underlie our existing mental and experiential modalities, having replaced our existing biological neurons.

Within the first year of the project, I had conceived of using these new capabilities to make ourselves smarter (an unwitting recapitulation of intelligence-amplification), of making ourselves more ethical (an unwitting recapitulation of moral engineering, explored by such thinkers as James Hughes, Julian Savulescu, and Asher Seidel, among others), of actively making ourselves happier – or rather of eliminating those normative biological aspects that bias us needlessly toward unhappiness (an unwitting variant of David Pearce’s hedonistic imperative) – and of exchanging real-time perception and memory at a depth and fidelity beyond that of sensory memories, extending essentially to thoughts, emotions, and indeed all experiential modalities available to us.

One could imagine my surprise upon finding Transhumanism and Immortalism as existing disciplines and movements; I felt as though I had borne a son and gone away for a day only to return and find him grown up – and that I was never his biological father to begin with.

The fact that both Transhumanist (i.e., enhancement, self-modification and self-modulation) and Immortalist concerns and conceptions developed concurrently throughout my work also reifies their having a shared gestalt. While they are not mutually inclusive (you can be one without being the other), they do share some strong similarities. They both eschew biological and naturalistic limitations, exalt autonomy and the provision of rights, and spring from a legitimate glorification of life and self.

The last point I would like to make here is one that I think helps subvert the superficial claim that Transhumanist or Immortalist objectives are essentially selfish concerns. At 14 I had no personal stake in trying to end death as fast as possible; both ending death and increasing our ability to better determine who we are and what we can do were from day one for the world and for broader humanity – particularly for those who didn’t have the majority of the rest of their lives to live: the 100,000 people who succumb to bitter finitude each day. I think most other Transhumanist and Immortalist thinkers would agree that any positive future involves broad access to both longevity treatments and to the latest means of improving and realizing ourselves.

None of these naïve misinterpretations are real concerns to the Transhumanist and Immortalist communities, except in regard to the degree to which they prevent people from digging deep enough to discover their stark insubstantiality. While they may be so off-base as to make their fallaciousness readily obvious to members of either community, and thus a seeming non-issue, I think the way in which they engender public misconceptions about Transhumanism and Immortalism validates our need to dispel them. Transhumanism is the only humanism; it exemplifies the very heart of what makes us human. The “trans” and the “human” in Transhumanism can only signify each other, for to be human is to strive to become more than human. I’ve thought this from the beginning, and this is a direction that my thinking – while having developed significantly since the practical work described here – is still oriented toward.

I wonder how many others there are out there like me, yet to approach the world with their vast extrapersonal visions of self-directed self-realization, yet to find the daring to throw their raucous good works in the face of this world that deserves better than to simply die quietly and unquestioningly, without revolt; others who, like me, saw that to try and change the world for the better is the very namesake of Man; who’ve crafted star-spangled dreams as large and as belligerently righteous as ending death and taking definite control of our ever-indefinite and indefinitive selves.

To every riled child who has ever had a vision larger than himself but that he has been too afraid to reveal, who has ever dreamt of bounding past the boundaries of present and toward the real prize, who has ever felt a dire need to make Man more than he is: I call thee out of the whorlworks and into the world! Come, show us what you’ve done!

What follows in my subsequent essays is a broad overview of my work in this area from 2006 to 2010 (at which time I had discovered enough Immortalist antecedents to stop actively working on conceptual varieties of techno-immortality), first in terms of my methodology for achieving indefinite longevity (i.e., my work in uploading or brain-emulation proper), and then in terms of the enhancement and modification side, focusing on similarities and differences between my vision and those developed in Transhumanism and Immortalism.

While this essay is largely personal and introductory, I think the fact of my independently arriving at many of the conceptual premises and conclusions of Transhumanism and Immortalism, and under different terms, also reifies the more substantial claim that Transhumanism isn’t as far-out as is normatively presumed—or perhaps rather that the “human” isn’t as right-here as is commonly supposed. For that curious creature of clamorous self-determination called Man is most familiar with unfamiliarity, and most at home in alien dendritic jungles, for having gone so far out as to come back around again.

While in 2010 I thought most of my ideas regarding practical approaches to immortality had already been conceived by others, I now see some differences between my approach and other conceptions of brain-emulation. One is the conceptual development of physical/prosthetic approaches to neuron replication and replacement (i.e., prosthetics on the cellular scale) in addition to strictly computational approaches. Another is several novel approaches to preserving both immediate and temporal subjective-continuity through a gradual (neuron) replacement procedure – approaches that are, to my knowledge, yet to be explored by the wider techno-immortalist community and brain-emulation discipline, respectively. Immediate subjective-continuity is the ability to have subjective experience, sometimes called sentience – as opposed to sapience, which denotes our higher cognitive capacities like abstract thinking; thus humans have sentience and sapience, while most non-mammals are thought to possess sentience but lack sapience. Temporal subjective-continuity is the property of feeling like the same subjective person as you did yesterday, or a week ago, or 10 years ago – despite the fact that all of the molecules constituting your brain are gone, having been replaced with identical molecules through metabolism (via molecular turnover rather than full-cell replacement) over the course of roughly a seven-year period.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors. He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.


4 in 5 of Americans Don’t Think Death Exists? – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 4, 2013
******************************

“Our hope of immortality does not come from any religions, but nearly all religions come from that hope.” ~ Robert Green Ingersoll

Recent polls indicate that 80% of Americans and over 50% of global citizens believe in an afterlife. I argue that conceptions of death which include or allow for the possibility of an afterlife are not only sufficiently different from conceptions of death devoid of an afterlife as to necessitate that they be given their own term and separate designation, but that such afterlife-inclusive notions of death constitute the very antithesis of afterlife-devoid conceptions of death! Not only are they sufficiently different as to warrant their own separate designations, but afterlife-inclusive conceptions of death miss the very point of death – its sole defining attribute or categorical qualifier as such. Death’s defining characteristic is not its specific details (e.g., whether physical death counts as death if the mind isn’t physical, as in substance dualism); its defining characteristic is the absence of life and subjectivity. Belief in an afterlife is not only categorically dissimilar but actually antithetical to conceptions of death precluding an afterlife. Thus to believe in heaven is to deny the existence of death!

The fact that their belief involves metaphysical, rather than physical, continuation isn’t a valid counter-argument. To argue via mind-body dualism that the mind is metaphysical, and thus will continue on in a metaphysical realm (i.e. heaven), in this specific case makes no difference. Despite the mind not being physical in such an argument, its relation to the metaphysical realm is the same as the relation of physical objects to the physical realm. It operates according to the “rules” and “causal laws” of the metaphysical realm, and so for all effective purposes can be considered physical in relation thereto, in the same sense that physical objects can be considered physical in relation to physical reality.

The impact of this categorical confusion extends beyond desire for semantic precision. If we hope to convince the larger public of radical life extension’s desirability, we need to first convince them that death exists. If one believes that one’s mind will continue on after physical death, then the potential attraction of physical immortality becomes negligible if not null. Why bother expending effort to attain immortality if it is inherent in the laws of the universe? It becomes a matter of not life or death but of convenience. This is a major problem: if the statistics mentioned can be trusted, then over half of the world population, and over 4/5ths of the USA, lack even the potential to see the attraction and advantage of life extension!

Widespread public awareness of and desire for radical longevity are important, because they are our best tools for achieving it. One promoter is more effective – that is, has more of an impact on how soon indefinite longevity is realized – than one researcher working on life extension. One promoter can get his or her message to scores of people per day. Conversely, many researchers have little say on what they want to work on, or the scope and uses for what they work on. One must be conservative to get research grants, and the research directions taken in any science discipline are more influenced by public opinion than the opinion of individual researchers. We can get more traction by influencing public opinion, per unit of time or effort (damn these unquantifiable metrics!), than with pragmatic research. If we get widespread support, then funding for research will come.

The preponderance of atheists in the Transhumanist community is not a coincidence. Only through godlessness can each become his own god – in which case god-as-superior-being becomes meaningless, and god-as-control-of-own-fate, god-as-self-empowerment and god-as-self-legitimation, self-signification, and self-dignification are the only valid definitions for such a term that remain. Autotheism encompasses atheism because it requires it (with the possible exception of co-creator theologies). Atheism is still to be valorized and commended in my opinion, for it exemplifies the resolute acceptance of freedom and ultimate responsibility for what we are and are to become. To be an atheist un-paralyzed by fear is to take for granted the desirability of one’s own freedom and lawless godfullness. On the other hand, successful intersections of religious thinking and Transhumanism do exist, as exemplified by the Mormon Transhumanist Association – whose success lies, I think, in its emphasis on co-creator theology (Mormons believe that it is Man’s responsibility to “grow up” into God – and if man and god are on equal footing, then where lie the dog, titan, and grandFather?). Thus while belief in heaven and, by consequence, all religions that include or allow for conceptions of an afterlife constitute a massive deterrent to the widespread popularity of immortalism, they also constitute, in utmost irony, some of its greatest potential legitimators due to their potential to evidence immortality as a deep-rooted human desire that transcends cultural distance and historical time.

Thus we should neither be precisely denouncing nor promoting religion, yet neither should we ignore it and simply let it be. Rather we should be a.) heralding religious adherents for their keen insight into the true values and desires of humanity, while b.) taking care to show them that life extension is nothing less than the modern embodiment of the very immortalist gestalt that they exemplified via conceptualizing an afterlife in the first place, and that belief in heaven held or maintained today goes against the very motivation and underlying utility that such a belief was trying to maintain and instill all along! By believing in heaven, they are going against all it was ever meant to achieve (the temporary satisfaction of our insatiable urge for life and escape from petty death) and all it was ever meant to constitute. This is not only the truest state of affairs, but the most advantageous as well. It allows us to at once ameliorate the problems caused by widespread belief in heaven, utilize the widespread and long-running belief in afterlife for the purpose of legitimizing immortalism to the wider and more conservative public, and show the long historical tradition of a belief in or longing for immortality to constitute perhaps the most deep-rooted human value, desire, and ideal (both in terms of historical time and in terms of importance, or a measure of how much it shapes our values, desires, and ideals), while at the same time avoiding irremediably insulting people who believe in an afterlife – which is detrimental only insofar as it risks having them ignore our cause not from reasoned conclusion but rather from seasoned spite.

We should consider two options. The first is to convince them that contemporary belief in heaven must be laid down, because its contemporary utility actually works against the original utility of a belief in heaven, as described above. A second option, which I think is less favorable but may be met with less ideological opposition, is that physical immortality constitutes the new embodiment of heaven on earth. Religious institutions like the Roman Catholic Church have, through the Vatican in this case, reformed their doctrine on evolution. Might the eschatological occurrences in the Book of Revelation be interpreted as the culminating intersection of the realm of Heaven with the realm of Earth? Might we try and incite them to change their doctrine on the afterlife, removing all metaphysical connotations due to society’s increasing secularization and the growing popularity of scientific materialism (also called metaphysical or methodological naturalism)? The change in doctrine over evolution – which the Catholic Church made, presumably, due to the large popularity of belief in evolution and the Church’s desire not to alienate so large a demographic – may be a precedent. Thus we should consider suggesting that the Church reinterpret its vision of Heaven as a continuing physical realization of the perfect society on Earth.

We should be portraying every religious crusade and mission to spread the word of god as a pilgrimage to bring immortality to the world! If one thinks that a specific moral, metaphysical, or cosmological (i.e., religious) system is required to attain life after death, what else is the pilgrimage to spread god’s word but a quest to bring methodological means of immortality to humanity? Let us at once show believers in an afterlife why they are wrong, commend them for their insight into deep-rooted and historically extensive human values, beliefs, and eternal longings, and win them over to our side!

We have been hurling our rank rage at death and staunch demand for life at the unyielding heavens since before the recognized inception of culture! From the first dawn in Sumer and on, extending across the Abrahamic tradition to touch upon Hinduism and the Chinese Faith, from Egyptian religion (with its particularly strong emphasis on the afterlife) to Norse mythology and beyond. Even Buddhism, which is often considered more philosophy than religion for its lack of a dogmatic stance on cosmology and an afterlife, has its versions of eternal life. Reincarnation is just as much a validating force for our desire for immortality as belief in an afterlife is. Reincarnation holds that non-metaphysical, physically embodied immortality, through cyclic rebirth, is possible (and while metaphysics is involved, the belief nonetheless reifies the concept of corporeal rebirth). And indeed, even though reincarnated forms precede Nirvana and are still located within the “illusory” realm of Samsara, this only goes to further emphasize the predominance of physical forms of radical longevity, the desire for and belief in which both reincarnation and the Buddhist versions of “heaven” exemplify. According to the Anguttara Nikaya (a Buddhist text), there are several types of heaven in existence, all part of the physical realm, the inhabitants or “denizens” of which have varying degrees of longevity. The denizens of Cātummaharajan live 9,216,000,000 years; denizens of Nimmānarati live 2,284,000,000 years; denizens of Tāvatimsa live 36,000,000 years; denizens of Tusita live 576,000,000 years; and the denizens of Yāma live 1,444,000,000 years.

Our history overflows with humanity’s upheaved herald of heaven, our exaltation of the existential extra, our fiery strife towards continued life. The mythic and religious historical traditions constitute at once indefinite longevity’s greatest contemporary obstacle and its greatest historical legitimator.

“There can be but little liberty on earth while men worship a tyrant in heaven.” ~ Robert Green Ingersoll


References:

  1. Belief of Americans in God, heaven and hell, 2011 (2011). Retrieved March 22, 2013 from http://www.statista.com/statistics/245496/belief-of-americans-in-god-heaven-and-hell/
  2. Poll: Nearly 8 in 10 Americans believe in angels (2011). CBS News. Retrieved March 22, 2013 from http://www.cbsnews.com/8301-201_162-57347634/poll-nearly-8-in-10-americans-believe-in-angels/
  3. Conan, N. (2010). Do You Believe In Miracles? Most Americans Do. In NPR News. Retrieved March 22, 2013 from http://www.npr.org/templates/story/story.php?storyId=124007551
  4. Americans Describe Their Views About Life After Death (2003). The Barna Group. Retrieved March 22, 2013 from http://www.barna.org/barna-update/article/5-barna-update/128-americans-describe-their-views-about-life-after-death
  5. 43,941 adherent statistic citations: membership and geography data for 4,300+ religions, churches, tribes, etc. (2007). Retrieved March 22, 2013 from http://www.adherents.com/Na/Na_516.html

Morality Needs Immortality to Live – Article by Franco Cortese

“In Order to be Go(o)d, We Can’t Die!” Says Kant

The New Renaissance Hat
Franco Cortese
May 2, 2013
******************************

Dead Immortalist Sequence – #1: Immanuel Kant (1724-1804)

Kant is often misconstrued as advocating radical conformity amongst people, a common misconception drawn from his Categorical Imperative, which states that each should act as though the rules underlying his actions can be made a universal moral maxim. The extent of this universality, however, stops well short of conformity: all Kant meant, I argue, was that each man should act as though the aspiration toward greater morality were able to be willed as a universal moral maxim.

This common misconception serves to illustrate another common and illegitimate portrayal of the Enlightenment tradition. Too often is the Enlightenment libeled for its failure to realize the ideal society. Too often is it characterized most essentially by its glorification of strict rationality, which engenders invalid connotations of stagnant, statuesque perfection – a connotation perhaps aided by the Enlightenment’s valorization of the scientific method, and its connotations of stringent and unvarying procedure and methodology in turn. This takes the prized heart of the Enlightenment tradition and flips it on its capsized ass. This conception of the Enlightenment tradition is not only wrong, but antithetical to the true organizing gestalt and prime impetus underlying the Age of the Enlightenment.

The Enlightenment wasn’t about realizing the perfect society but rather about idealizing the perfect society – the striving towards an ever-inactualized ideal which, once realized, would cease to be ideal for that very reason. The Enlightenment was about unending progress towards that ideal state – for both Man as society and man as singular splinter – of an infinite forward march towards perfection, which, upon definitively reaching perfection, will have failed to achieve its first-sought prize. The virtue of the Enlightenment lies in the virtual, and its perfection in the infinite perfectibility inherent in imperfection.

This truer, though admittedly less normative, interpretation of the Enlightenment tradition, taking into account its underlying motivations and projected utilities – rather than simply taking flittered glints from the fallacious surface and holding them up for solid, tangible truth – also serves to show the parallels between the Enlightenment gestalt and Transhumanism. James Hughes, for one, characterizes Transhumanism as a child of the Enlightenment Tradition [1].

One can see with intuitive lucidity that characterizing the Enlightenment by its valorization of rationality goes against the very driver underlying that valorization. Rationality was exalted during the Age of Enlightenment for its potential to aid in skepticism toward tradition. Leave the chiseled and unmoving, petty perfection of the statue for the religious traditions the Enlightenment was rebelling against – the inviolable God with preordained plan, perfect for his completion and wielding total authority over the static substance of Man; give the Enlightenment rather the starmolten fire-afury and undulate aspiration toward ever-forth-becoming highers that it sprang from in the first place. The very aspects which cause us to characterize the Enlightenment as limiting, rigid, and unmolten are those very ideals that, if never realized definitively – if instead made to form an ongoing indefinite infinity – would thereby characterize the Enlightenment tradition as a righteous roiling rebellion against limitation and rigour – as a flighty dive into the molten maelstrom of continuing mentation toward better and truer versions of ourselves and society that was its real underlying impetus from the beginning.

This truer gestalt of the Enlightenment impinges fittingly upon the present study. Kant is often considered one of the fathers of the Enlightenment. In a short essay entitled “What is the Enlightenment?” [2], Kant characterizes the essential archetype of Man (as seen through the lens of the Enlightenment) in a way wholly in opposition to the illegitimate conceptions of the Enlightenment described above – and in vehement agreement with the less-normative interpretation of the Enlightenment that followed. It is often assumed, much in line with such misconceptions, that the archetype of Man during the Age of the Enlightenment was characterized by rational rigour and scientific stringency. However, this archetype of the mindless, mechanical automaton was the antithesis of Man’s then-contemporary archetype; the automaton was considered rather the archetype of animality – which can be seen as antithetical to the Enlightenment’s take on Man’s essence, with its heady rationality and lofty grasping towards higher ideals. In his essay, Kant characterizes the Enlightenment’s archetype of Man as the rebellious schoolboy who cannot and shall not be disciplined into sordid subservience by his schoolmasters. Here Kant concurs gravely from beyond the grave that Man’s sole central and incessant essence is his ongoing self-dissent, his eschewing of perverse obligation, his disleashing the weathered tethers of limitation, and his ongoing battle with himself for his own self-creation.

It is this very notion of infinite progress towards endlessly perfectible states of projected perfection that, too, underlies his ties to Immortalism. Indeed, his claim that to retain morality we must have comprehensively unending lives – that is, we must never ever die – rests crucially on this premise.

In his Theory of Ethics [3] under “Part III: The Summum Bonum, God and Immortality” [4], Kant argues that his theory of ethics necessitates the immortality of the soul in order to remain valid according to the axioms it adheres to. This is nothing less than a legitimation of the desirability of personal immortality from a 1700’s-era philosophical rockstar. It is important to note that the aspects making it so crucial to Kant’s ethical system have to do with immortality in general, and indeed would have been satisfied by non-metaphysical (i.e., physical and technological) means – having more to do with the end of continued life and indefinite longevity or Superlongevity in particular, than with the particular means used to get there, which in his case is a metaphysical means. Karl Ameriks writes in reference to Kant here: “… the question of immortality is to be understood as being about a continued temporal existence of the mind. The question is not whether we belong to the realm beyond time but whether we will persist through all time… Kant also requires this state to involve personal identity.” [5]. While Kant did make some metaphysical claims tied to immortality – namely the association of degradation and deterioration with physicality, which when combined with the association of time with physicality may have led to his characterization of the noumenal realm (being the antithesis of the phenomenal realm) as timeless and free from causal determination – these claims are beyond the purview of this essay, and will only be touched upon briefly. What is important to take away is that the metaphysical and non-metaphysical justifications are equally suitable vehicles for Kant’s destination.

Note that any italics appearing within direct quotations are not my own and are recorded as they appeared in the original. All italics external to direct quotations are my own. In the 4th section, “The immortality of the soul as a postulate of pure practical reason,” of the 3rd part of Theory of Ethics, Kant writes: “Pure practical reason postulates the immortality of the soul, for reason in the pure and practical sense aims at the perfect good (Summum Bonum), and this perfect good is only possible on the supposition of the soul’s immortality.” [5]

Kant is claiming here that reason (in both senses with which they are taken into account in his system – that is, as pure reason and practical reason) is aimed at perfection, which he defines as continual progress towards the perfect good – rather than the attainment of any such state of perfection, and that as finite beings we can only achieve such perfect good through an unending striving towards it.

In a later section, “The Antinomy of Practical Reason (and its Critical Solution)” [6], he describes the Summum Bonum as “the supreme end of a will morally determined”. In an earlier section, “The Concept of the Summum Bonum” [7], Kant distinguishes between two possible meanings for Summum: it can mean supreme in the sense of absolute (not contingent on anything outside itself), and perfect (not being part of a larger whole). I take him to claim that it means both.

He also claims personal immortality is a necessary condition for the possibility of the perfect good. In the same section he describes the Summum Bonum as the combination of two distinct features: happiness and virtue (defining virtue as worthiness of being happy, and in this section synonymizing it with morality). Both happiness and virtue are analytic and thus derivable from empirical observation.

However, their combination in the Summum Bonum does not follow from either on its own and so must be synthetic, or reliant upon a priori cognitive principles, Kant reasons. I interpret this as Kant’s claiming that the possibility of the Summum Bonum requires God and the Immortality of the Soul because this is where Kant grounds his a priori, synthetic, noumenal world – i.e. the domain where those a priori principles exist (in/as the mind of God, for Kant).

Kant continues:

“It is the moral law which determines the will, and in this will the perfect harmony of the mind with the moral law is the supreme condition of the summum bonum… the perfect accordance of the will with the moral law is holiness, a perfection of which no rational being of the sensible world is capable at any moment of his existence. Since, nevertheless, it is required as practically necessary, it can only be found in a progress in infinitum towards that perfect accordance, and on the principles of pure practical reason is nonetheless necessary to assume such a practical progress as the real object of our will.” [8]

Thus not only does Kant argue for the necessity of the soul’s personal immortality by virtue of the fact that perfection is unattainable while constrained by time; he argues along an alternate line of reasoning that such perfection is nonetheless necessary for our morality, happiness, and virtue, and that we must therefore progress infinitely toward it without ever definitively reaching it if the Summum Bonum is to remain valid according to its own defining attributes and categorical-qualifiers as-such.

Kant decants:

“Now, this endless progress is only possible on the supposition of an endless duration of existence and personality of the same rational being (which is called the immortality of the soul). The Summum Bonum, then, practically is only possible on the supposition of the immortality of the soul; consequently this immortality, being inseparably connected with the moral law, is a postulate of pure practical reason (by which I mean a theoretical proposition, not demonstrable as such, but which is an inseparable result of an unconditional a priori practical law). This principle of the moral destination of our nature, namely, that it is only in an endless progress that we can attain perfect accordance with the moral law… For a rational but finite being, the only thing possible is an endless progress from the lower to higher degrees of moral perfection. In Infinite Being, to whom the condition of time is nothing… is to be found in a single intellectual intuition of the whole existence of rational beings. All that can be expected of the creature in respect of the hope of this participation would be the consciousness of his tried character, by which, from the progress he has hitherto made from the worse to the morally better, and the immutability of purpose which has thus become known to him, he may hope for a further unbroken continuance of the same, however long his existence may last, even beyond this life, and thus may hope, not indeed here, nor in any imaginable point of his future existence, but only in the endlessness of his duration (which God alone can survey) to be perfectly adequate to his will.” [9]

So, Kant first argues that the existence of the Summum Bonum requires the immortality of the soul both a.) because finite beings conditioned by time by definition cannot achieve the absolute perfection of the Summum Bonum, and can only embody it through perpetual progress towards it, and b.) because the components of the Summum Bonum (both of which must be co-present for it to qualify as such) are unitable only synthetically through a priori cognitive principles, which he has argued elsewhere must exist in a domain unconditioned by time (which is synonymous with his conception of the noumenal realm) and which must thus be perpetual for such an extraphysical realm to be considered unconditioned by time and thus noumenal. The first would correspond to Kant’s strict immortalist underpinnings, and the second to the alternate (though not necessarily contradictory) metaphysical justification alluded to earlier.

Having argued that the possibility of the Summum Bonum requires personal immortality, Kant argues that our freedom/autonomy, which he locates in the will (and he further locates the will as being determined by the moral law), also necessitates the Summum Bonum. This would correspond to his more embryonically Transhumanist inclinations. In the first section (“The Concept of Summum Bonum”) he writes, “It is a priori (morally) necessary to produce the summum bonum by freedom of will…” I interpret this statement in the following manner. He sees morality as a priori and synthetic, and as the determining principle that allows us to act as causes in the world without being caused by it; for Kant, our freedom (i.e., the quality of not being externally determined) requires the noumenal realm, because otherwise we are trapped in the freedom-determinism paradox. Thus the Summum Bonum also vicariously necessitates the existence of God, because God is necessary for the existence of a noumenal realm unaffected by physical causation (note that Kant calls physicality “the sensible world”). Such a God could be (and indeed has been described by Kant in terms which would favor this interpretation) synonymous with the entire noumenal realm, with every mind forming but an atom, as it were, in the larger metaorganismal mind of a sort of meta-pantheistic, quasi-Spinozian conception of God – in other words, one quite dissimilar to the anthropomorphic connotations usually invoked by the word.

Others have drawn similar conclusions and made similar interpretations. Karl Ameriks summarizes Kant’s reasoning as follows:

“All other discussions of immortality in the critical period are dominated by the moral argument that Kant sets out in the second critique. The argument is that morality obligates us to seek holiness (perfect virtue), which therefore must be possible, and can only be so if God grants us an endless afterlife in which we can continually progress… As a finite creature man is incapable of ever achieving holiness, but on – and only in – an endless time could we supposedly approximate to it (in the eyes of God) as fully as could be expected… Kant is saying not that real holiness is ever a human objective, but rather that complete striving for it can be, and this could constitute for man a state of ‘perfect virtue’…” [10]

The emphasis on indefiniteness is also present in the secondary literature; Ameriks remarks that Kant “…makes clear that the ‘continual progress’ he speaks of can ultimately have a ‘non-temporal’ nature in that it is neither momentary nor of definitive duration nor actually endless”. Only through never quite reaching our perfected state can we retain the perfection of lawless flawedness.

Paul Guyer corroborates my claim that the determining factor is not that mind is an extramaterial entity or substance, but rather that, if morality requires an infinite good and we are finite beings, then we must be finite beings along an infinite stretch of time in order to satisfy the categorical requirement of possessing such an infinity. He writes that “…the possibility of the perfection of our virtuous disposition requires our actual immortality…” [11] and that “…God and immortality are conditions specifically of the possibility of the ultimate object of virtue, the highest good – immortality is the condition for the perfection of virtue and God that for the realization of happiness…” [12]

In summary, it doesn’t matter that Kant’s platform was metaphysical rather than technological, because the salient point and determining factors were not the specific operation or underlying principles (or the “means”) used to achieve immortality, but rather the very ends themselves. Being able to both live and progress in(de)finitely was the loophole that provided, for Kant, both our freedom and our morality. Kant said we can’t die if we want to be moral, that we can’t die if we want to gain virtue, and that we can’t die if we want to remain free.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors. He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on their Futurists Board and their Life Extension Board) and contributes regularly to their blog.

References:

[1] Hughes, J. J. (2001). The Future of Death: Cryonics and the Telos of Liberal Individualism. Journal of Evolution & Technology, 6.

[2] Kant, I. (1996). In M. J. Gregor (Ed.), Practical Philosophy. Cambridge University Press.

[3] Kant, I. (1957). In T. M. Greene (Ed.), Kant Selections. New York: Charles Scribner’s Sons.

[4] Ibid., p. 350.

[5] Ameriks, K. (2000). Kant’s Theory of Mind: An Analysis of the Paralogisms of Pure Reason. Oxford University Press.

[6] Ibid., p. 352.

[7] Ibid., p. 350.

[8] Ibid., p. 358.

[9] Ibid., p. 359.

[10] Ameriks, K. (2000). Kant’s Theory of Mind: An Analysis of the Paralogisms of Pure Reason. Oxford University Press.

[11] Freydberg, B. (2005). Imagination in Kant’s Critique of Practical Reason. Indiana University Press.

[12] Guyer, P. (2000). Kant on Freedom, Law, and Happiness. Cambridge University Press.

This TRA feature has been edited in accordance with TRA’s Statement of Policy.