
Choosing the Right Scale for Brain Emulation – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 2, 2013
******************************
This essay is the ninth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first eight chapters were previously published on The Rational Argumentator under the following titles:
***

The two approaches falling within this class considered thus far are (a) computational models of the biophysical (e.g., electromagnetic, chemical, and kinetic) operation of neurons, that is, of the physical processes instantiating their emergent functionality, whether at the scale of tissues, molecules, atoms, or anything in between; and (b) abstracted models, a term designating anything that computationally models the neuron using its (sub-neuron but super-protein-complex) components themselves as the chosen model-scale. (The former instead takes as its model-scale the scale at which the physical processes emergently instantiating those higher-level neuronal components exist, such as the membrane and the individual proteins forming the transmembrane protein-complexes.) A model counts as abstracted regardless of whether each component is abstracted as a normative-electrical-component analogue (i.e., using circuit diagrams in place of biological schematics, as when the lipid bilayer membrane is equated with a capacitor connected to a variable battery) or as a mathematical model in which a relevant component or aspect of the neuron becomes a term (e.g., a variable or constant) in an equation.

It was during the process of trying to formulate different ways of mathematically (and otherwise computationally) modeling neurons or sub-neuron regions that I laid the conceptual embryo of the first new possible basis for subjective-continuity: the notion of operational isomorphism.

A New Approach to Subjective-Continuity Through Substrate Replacement

There are two other approaches to increasing the likelihood of subjective-continuity that I explored during this period, each based on one of two postulated physical bases for discontinuity. Note that these approaches are unrelated to graduality, which has been the main determining factor impacting the likelihood of subjective-continuity considered thus far. The new approaches consist of designing the NRUs so as to retain the respective postulated physical bases for subjective-continuity that exist in the biological brain. Thus they are unrelated to increasing the efficacy of the gradual-replacement procedure itself; they bear instead on the design requirements that the functional equivalents used to gradually replace the neurons must meet in order to maintain immediate subjective-continuity.

Operational Isomorphism

Whereas functionality deals only with the emergent effects or end-product of a given entity or process, operationality deals with the procedural operations performed so as to give rise to those emergent effects. A mathematical model of a neuron might be highly functionally equivalent while failing to be operationally equivalent in most respects. Isomorphism can be considered a measure of “sameness”, but technically means a one-to-one correspondence between the elements of two sets (which would correspond to operational isomorphism) or between the sums or products of the elements of two sets (which would correspond to functional isomorphism, using the definition of functionality employed above). Thus, operational isomorphism is the degree to which the sub-components of two larger-scale components (be those sub-components material, as in entities, or procedural, as in processes), or the operational modalities possessed by each respective collection of sub-components, are equivalent.
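
The distinction can be made concrete with a toy example. The following Python sketch is mine, not the author's; the pipelines and step-functions are invented for illustration. It treats a component as a sequence of procedural steps: two components are functionally isomorphic when their end products match, and operationally isomorphic when their steps correspond one-to-one.

```python
# Toy illustration (not from the original text): two components are
# functionally isomorphic if their end products match, and operationally
# isomorphic if their internal steps correspond one-to-one.

def functionally_isomorphic(steps_a, steps_b, start=1):
    """Compare only the emergent end-product of each process."""
    result_a = result_b = start
    for f in steps_a:
        result_a = f(result_a)
    for f in steps_b:
        result_b = f(result_b)
    return result_a == result_b

def operationally_isomorphic(steps_a, steps_b):
    """Require a 1-to-1 correspondence between the procedural steps."""
    return len(steps_a) == len(steps_b) and all(
        a == b for a, b in zip(steps_a, steps_b))

double = lambda x: x * 2
add_two = lambda x: x + 2

# Both pipelines map 2 -> 8, so they are functionally isomorphic ...
pipeline_a = [double, double]      # 2 -> 4 -> 8
pipeline_b = [add_two, double]     # 2 -> 4 -> 8
print(functionally_isomorphic(pipeline_a, pipeline_b, start=2))  # True
# ... but their internal operations differ, so not operationally isomorphic.
print(operationally_isomorphic(pipeline_a, pipeline_b))          # False
```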

To what extent does the brain possess operational isomorphism? It seems to depend on the scale being considered. At the highest scale, different areas of the nervous system are classed as systems (as in functional taxonomies) or regions (as in anatomical taxonomies). At this level the separate regions (i.e., components sharing a scale) differ widely from one another in operational modality; they process information very differently from the way other components on the same scale do. If this scale were chosen as the model-scale of our replication approach, and if we accept the premise that the physical basis for subjective-continuity is the degree of operational isomorphism between components at a given scale, then we would have a high probability of replicating functionality but a low probability of retaining subjective-continuity through gradual replacement. This would hold even if we used the degree of operational isomorphism between separate components as the only determining factor for subjective-continuity, ignoring concerns of graduality (e.g., the scale or rate, or scale-to-rate ratio, at which gradual substrate replacement occurs).

Contrast this to the molecular scale, where the operational modality of each component (being a given molecule) and the procedural rules determining the state-changes of components at this scale are highly isomorphic. The state-changes of a given molecule are determined by molecular and atomic forces. Thus if we use an informational-functionalist approach, choose a molecular scale for our model, and accept the same premises as the first example, we would have a high probability of both replicating functionality and retaining subjective-continuity through gradual replacement because the components (molecules) have a high degree of operational isomorphism.

Note that this is only a requirement for the sub-components instantiating the high-level neural regions/systems that embody our personalities and higher cognitive faculties such as the neocortex — i.e., we wouldn’t have to choose a molecular scale as our model scale (if it proved necessary for the reasons described above) for the whole brain, which would be very computationally intensive.

So at the atomic and molecular scales the brain possesses a high degree of operational isomorphism. On the scale of the individual protein complexes which collectively form a given sub-neuronal component (e.g., an ion channel), components still appear to possess a high degree of operational isomorphism, because all state-changes are determined by the rules governing proteins and protein-complexes (i.e., biochemistry, and particularly protein-protein interactions); by virtue of sharing the same general constituents (amino acids), all components at this scale share the factors determining their state-changes.

The scale of individual neuronal components, however, seems to possess a comparatively lesser degree of operational isomorphism. Some ion channels are ligand-gated while others are voltage-gated; thus different aspects of physicality (molecular shape and voltage, respectively) form the procedural rules determining state-changes at this scale. Since there are two different determining factors at this scale, its degree of operational isomorphism is less than that of the protein and protein-complex scale and of the molecular scale, both of which appear to have only one governing procedural-rule set. The scale of individual neurons, by contrast, appears to possess a greater degree of operational isomorphism: every neuron fires according to its threshold value, summing its analog inputs into a binary output (the neuron either fires or does not). Even though individual neurons of a given type are more operationally isomorphic in relation to each other than to neurons of other types, all neurons regardless of type still act in a highly isomorphic manner.

However, the scale of neuron-clusters and neural networks, which operate and communicate according to spatiotemporal sequences of firing patterns (action-potential patterns), appears to possess a lesser degree of operational isomorphism than individual neurons, because different sequences of firing patterns will mean different things to two respective neural clusters or networks. Note also that at this scale the degree of functional isomorphism between components appears to be less than their degree of operational isomorphism; that is, the way each cluster or network operates is more similar across clusters than is their actual function (i.e., what they effectively do). Lastly, at the scale of high-level neural regions/systems, components (i.e., neural regions) differ significantly in morphology, in operationality, and in functionality, and thus appear to constitute the scale possessing the least operational isomorphism.

I will now illustrate the concept of operational isomorphism using the physical-functionalist and the informational-functionalist NRU approaches as examples. In terms of the physical-functionalist (i.e., prosthetic-neuron) approach, both the passive (i.e., “direct”) and the CPU-controlled sub-classes are, each taken on its own, operationally isomorphic. An example of a physical-functionalist NRU that would not possess operational isomorphism is one that uses a passive-physicalist approach for one type of component (e.g., voltage-gated ion channels) and a CPU-controlled/cyber-physicalist approach [see Part 4 of this series] for another type of component (e.g., ligand-gated ion channels): on that scale the components act according to different technological and methodological infrastructures, exhibit different operational modalities, and thus appear to possess a low degree of operational isomorphism. Note that the concern is not the degree of operational isomorphism between the functional-replication units and their biological counterparts, but rather the degree of operational isomorphism between the functional-replication units and other units on the same scale.

Another possibly relevant type of operational isomorphism is the degree of isomorphism between the individual sub-components or procedural operations (i.e., “steps”) composing a given component, designated here as intra-operational isomorphism. While very similar to the degree of isomorphism for the scale immediately below, this is not equivalent to it, in that the sub-components of a given larger component could be functionally isomorphic in relation to each other without being operationally isomorphic in relation to all other components on that scale. The passive sub-approach of the physical-functionalist approach would possess a greater degree of intra-operational isomorphism than would the CPU-controlled/cyber-physicalist sub-approach, because presumably each component would interact with the others (via physically embodied feedback) according to the same technological and methodological infrastructure, be it mechanical, electrical, chemical, or otherwise. The CPU-controlled sub-approach would by contrast possess a lesser degree of intra-operational isomorphism, because the sensors, the CPU, and the electric or electromechanical systems (the three main sub-components of each singular neuronal component, e.g., an artificial ion channel) operate according to different technological and methodological infrastructures and thus exhibit alternate operational modalities in relation to each other.

In regard to the informational-functionalist approach, an operationally isomorphic NRU model is one wherein, regardless of the scale used, the type of approach used to model a given component on that scale is as isomorphic as possible with the approaches used to model other components on the same scale. For example, if one uses a mathematical model to simulate spiking regions of the dendritic spine, then one shouldn't use a non-mathematical (e.g., strictly computational-logic) approach to model non-spiking regions of the dendritic spine. Since the number of possible variations is greater for the informational-functionalist approach than for the physical-functionalist approach, there are more gradations in the degree of operational isomorphism. Using the exact same branch of mathematics to model two respective components would incur a greater degree of operational isomorphism than using alternate mathematical techniques from different disciplines. Likewise, if we used different computational approaches to model the respective components, we would have a lesser degree of operational isomorphism. And if we emulated some components while merely simulating others, we would have a lesser degree of operational isomorphism than if both were either strictly simulatory or strictly emulatory.

If this premise proves true, it suggests that when picking the scale of our replication approach (be it physical-functionalist or informational-functionalist), we should choose a scale that exhibits operational isomorphism (for example, the molecular scale rather than the scale of high-level neural regions), and that we shouldn't use widely dissimilar types of modeling techniques for one component (e.g., a molecular system) and for another component on the same scale.

Note that unlike operational-continuity, the degree of operational isomorphism was not an explicit concept or potential physical basis for subjective-continuity at the time of my work on immortality (i.e., the concept wasn't yet fully fleshed out in 2010). Rather, it was formulated while going over my notes from that period so as to distill the broad developmental gestalt of the project; though it appears somewhat inherent in (i.e., hinted at by) those notes, it wasn't made explicit until relatively recently.

The next chapter describes the rest of my work on technological approaches to techno-immortality in 2010, focusing on a second new approach to subjective-continuity through a gradual-substrate-replacement procedure, and concluding with an overview of the ways my project differs from the other techno-immortalist projects.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors. He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Squishy Machines: Bio-Cybernetic Neuron Hybrids – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 25, 2013
******************************
This essay is the eighth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first seven chapters were previously published on The Rational Argumentator under the following titles:
***

By 2009 I felt that the major classes of physicalist-functionalist replication approaches were largely developed, and that further work would produce only minor variations in approach and procedure. These developments consisted of contingency plans for the case in which some aspect of neuronal operation couldn't be replicated with alternate, non-biological physical systems and processes; they were based around the goal of maintaining those biological (or otherwise organic) systems and processes artificially and of integrating them with the processes that could be reproduced artificially.

2009 also saw further developments in the computational approach, where I conceptualized a new sub-division in the larger class of the informational-functionalist (i.e., computational, which encompasses both simulation and emulation) replication approach, which is detailed in the next chapter.

Developments in the Physicalist Approach

During this time I explored mainly varieties of the cybernetic physical-functionalist approach. This involved the use of replicatory units that preserve certain biological aspects of the neuron while replacing certain others with functionalist replacements, and of other NRUs that preserve alternate biological aspects while replacing different ones. The reasoning behind this approach was twofold. The first consideration was that there was a chance, no matter how small, that we might fail to sufficiently replicate some relevant aspect(s) of the neuron either computationally or physically, through failing to understand the underlying principles of that particular sub-process or aspect. The second was to have an approach that would work in the event that some material aspect couldn't be sufficiently replicated via non-biological, physically embodied systems (i.e., the normative physical-functionalist approach).

However, these varieties were conceived of in case we couldn’t replicate certain components successfully (i.e., without functional divergence). The chances of preserving subjective-continuity in such circumstances are increased by the number of varieties we have for this class of model (i.e., different arrangements of mechanical replacement components and biological components), because we don’t know which we would fail to functionally replicate.

This class of physical-functionalist model can usefully be considered as electromechanical-biological hybrids, wherein the receptors (i.e., receptor proteins) on the post-synaptic membrane are integrated with the artificial membrane and coexist with artificial ion-channels, or wherein the biological membrane is retained while the receptors and ion-channels are replaced with functional equivalents instead. The biological components would be extracted from the existing biological neurons and reintegrated with the artificial membrane. Otherwise they would have to be synthesized via electromechanical systems, such as, but not limited to, chemical stores of amino acids released in specific sequences to facilitate in vivo protein folding and synthesis; the resulting proteins would then be transported to and integrated with the artificial membrane. This is preferable to providing stores of pre-synthesized proteins, owing to the complexities of storing synthesized proteins without decay or functional degradation over storage time, and of restoring them from their “stored”, inactive state to a functionally active state when ready for use.

During this time I also explored the possibility of using the neuron's existing protein-synthesis systems to facilitate the construction and gradual integration of the artificial sections with the existing lipid bilayer membrane. Work in synthetic biology allows us to use viral gene vectors to replace a given cell's constituent genome, thereby making the cell manufacture various non-organic substances in place of the substances created via its normative protein-synthesis. We could use such techniques to replace the existing protein-synthesis instructions with ones that manufacture and integrate the molecular materials constituting the artificial membrane sections, artificial ion-channels, and ion-pumps. Indeed, it may even be a functional necessity to gradually replace a given neuron's protein-synthesis machinery with machinery dedicated to the replacement, integration, and maintenance of the non-biological sections' material, because otherwise those parts of the neuron would keep trying to rebuild each section of lipid bilayer membrane we iteratively remove and replace. This could be problematic, and so successful gradual replacement of single neurons may require a means of gradually switching off and/or replacing portions of the cell's protein-synthesis systems.


Neuronal “Scanning” and NRU Integration – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 23, 2013
******************************
This essay is the seventh chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first six chapters were previously published on The Rational Argumentator under the following titles:
***

I was planning on using the NEMS already conceptually developed by Robert Freitas for nanosurgery applications (supplemented by MEMS if the necessary technological infrastructure was unavailable at the time) to take in vivo recordings of the salient neural metrics and properties needing to be replicated. One novel approach was to design the units with elongated, worm-like bodies, disposing the computational and electromechanical apparatus along the length of the unit. This sacrifices width for length so as to allow the units to fit inside the extracellular space between neurons and glial cells, as a postulated solution to a lack of sufficient miniaturization. If a unit was too wide to be used in this way, then reducing its width while extending its length in the same proportion (conserving internal volume) would allow it to operate in the extracellular space, provided that its means of data-measurement wasn't itself too large to fit (the span of ECF between two adjacent neurons is around 200 angstroms for much of the brain).
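
As a rough numeric illustration of this length-for-width trade (a sketch of mine; the volume figure is hypothetical, with only the ~200-angstrom gap taken from the text):

```python
# A rough geometric check (illustration only): a unit of fixed internal
# volume, narrowed to a square cross-section matching the ~200-angstrom
# (20 nm) extracellular gap, must grow proportionally in length.

ECF_GAP_NM = 20.0   # ~200 angstroms of extracellular space (from the text)

def required_length_nm(volume_nm3, width_nm=ECF_GAP_NM):
    """Length of a square-cross-section 'worm' of the given volume and width."""
    return volume_nm3 / (width_nm ** 2)

# Hypothetical example: 1e6 nm^3 of apparatus (a 100 nm cube) re-shaped to a
# 20 nm x 20 nm cross-section must stretch to 2500 nm in length.
print(required_length_nm(100.0 ** 3))   # 2500.0
```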

I was planning on using the chemical and electrical sensing methodologies already in development for nanosurgery as the technological and methodological infrastructure for the neuronal data-measurement methodology. However, I also explored my own conceptual approaches to data-measurement, focusing on the detection of variation in morphological features in particular, as the extant schemes for electrical and chemical sensing seemed either sufficiently developed or to be receiving sufficient developmental support and/or funding. One approach was the use of laser-scanning, or of reflection-based ranging more generally (e.g., sonar), to measure and record morphological data. Another was a device that uses a 2D array of depressible members (e.g., solid members attached to a spring or ratchet assembly, operatively connected to a means of detecting how much each individual member is depressed, such as piezoelectric crystals that produce electricity in response and proportion to applied mechanical strain). The device would be run along the neuronal membrane, and the topology of the membrane would be recorded by the pattern of depression readings, which are then integrated to provide a topographic map of the neuron (the relative locations of integral membrane components determining morphology, and the magnitude of depression determining emergent topology). This approach could also potentially be used to identify the integral membrane proteins, rather than using electrical or chemical sensing techniques, if the topologies of the respective proteins are sufficiently different to be detectable by the unit (as determined by its degree of precision, which is typically a function of its degree of miniaturization).
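
To make the integration step concrete, here is a minimal sketch (mine; the depth readings and protein "signatures" are invented placeholders, and numpy is assumed available) of how successive rows of member-depression readings might be stacked into a topographic map and matched against known protein profiles:

```python
# A sketch (assumptions mine) of integrating depressible-member readings:
# each pass contributes one row of depression depths, and the accumulated
# rows form a topographic height-map of the membrane.

import numpy as np

def integrate_passes(passes):
    """Stack successive 1D rows of member-depression readings into a 2D map."""
    return np.vstack(passes)

def classify_protein(patch, signatures):
    """Match a local topology patch against known protein height profiles."""
    best, best_err = None, float("inf")
    for name, profile in signatures.items():
        err = float(np.mean((patch - profile) ** 2))   # mean squared error
        if err < best_err:
            best, best_err = name, err
    return best

# Hypothetical depth readings (arbitrary units) and protein signatures.
rows = [np.array([0.1, 0.9, 0.8, 0.1]), np.array([0.1, 0.8, 0.9, 0.2])]
topo_map = integrate_passes(rows)
signatures = {"ion_channel": np.array([0.9, 0.8]), "pump": np.array([0.3, 0.2])}
print(classify_protein(topo_map[0, 1:3], signatures))   # "ion_channel"
```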

The constructional and data-measurement units would also rely on the technological and methodological infrastructure for organization and locomotion that would be used in normative nanosurgery. I conceptually explored such techniques as the use of a propeller, the use of pressure-based methods (i.e., a stream of water acting as jet exhaust would in a rocket), the use of artificial cilia, and the use of tracks that the unit attaches to so as to be moved electromechanically. The track approach decreases computational intensiveness (a measure of required computation per unit time): rather than having a unit compute its relative location so as to perform obstacle-avoidance and not, say, damage in-place biological neurons, the tracks limit the unit's degrees of freedom, preventing it from having to incorporate computational techniques of obstacle-avoidance (and their entailed sensing apparatus). This also decreases the necessary precision (and thus, presumably, the required degree of miniaturization) of the means of locomotion, which would need to be much greater if the unit were to perform real-time obstacle-avoidance. Such tracks would be constructed in iterative fashion, as sketched below: the constructional system analyzes the space in front of it to determine whether that space is occupied by a neuron terminal or soma, extrudes a track segment where it detects the absence of biological material, and then moves along the newly extruded segment, progressively extending the track through the spaces between neurons as it moves forward.
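
A minimal sketch of that extrusion loop (mine; the 1D corridor and the sensing function are stand-ins for the unit's actual sensing apparatus):

```python
# Toy version of the iterative track-extrusion loop: probe the space ahead,
# lay a segment only where no biological material is detected, then advance
# along the newly laid segment.

def extrude_track(start, goal, occupied):
    """Lay segments one at a time, advancing only into sensed-empty space."""
    track, position = [], start
    while position < goal:
        nxt = position + 1
        if nxt in occupied:      # biological material detected ahead:
            break                # stop rather than damage a neuron
        track.append(nxt)        # extrude one segment into free space
        position = nxt           # move along the newly laid segment
    return track

# Cell 6 is occupied by a neuron terminal in this 1D toy corridor.
print(extrude_track(0, 10, occupied={6}))   # [1, 2, 3, 4, 5]
```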

Non-Distortional in vivo Brain “Scanning”

A novel avenue of enquiry during this period involved counteracting or taking into account the distortions that the data-measurement units cause in the elements or properties they are measuring, and subsequently applying such corrections to the recorded data. A unit changes the very local environment it is supposed to be measuring and recording, which becomes problematic. My solution was to test which operations performed by the units have the potential to distort relevant attributes of the neuron or its environment, and to build units that compensate for such distortion either physically or computationally.

If we can reduce the ways a recording unit's operation distorts neuronal behavior to a list of mathematical rules, then we can take the recordings and apply mathematical techniques to eliminate or "cancel out" those distortions post-measurement, thus arriving at what would have been the correct data. This approach works only if the distortions affect the recorded data (i.e., change it in predictable ways), and not if they affect the unit's ability to actually access, measure, or resolve such data.
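
A minimal sketch of the idea, assuming a distortion that follows a single known rule (the rule itself is invented for illustration):

```python
# A minimal sketch (my assumption of a simple distortion model): if the
# recording unit perturbs the measured voltage by a known, predictable
# amount, that perturbation can be subtracted out post-measurement.

def distortion(distance_nm):
    """Hypothetical rule: the unit's presence adds a bias that decays
    with its distance from the measurement site."""
    return 0.5 / (1.0 + distance_nm)   # mV of induced bias

def correct(recorded_mv, distance_nm):
    """Recover the undistorted value by inverting the known rule."""
    return recorded_mv - distortion(distance_nm)

raw = -64.75                 # mV, as recorded with the unit 1 nm away
print(correct(raw, 1.0))     # -65.0 mV: the inferred true potential
```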

The second approach applies the method underlying the first approach to the physical environment of the neuron. A unit senses and records the constituents of the area of space immediately adjacent to its edges and mathematically models that “layer”; i.e., if it is meant to detect ionic solutions (in the case of ECF or ICF), then it would measure their concentration and subsequently model ionic diffusion for that layer. It then moves forward, encountering another adjacent “layer” and integrating it with its extant model. By being able to sense iteratively what is immediately adjacent to it, it can model the space it occupies as it travels through that space. It then uses electric or chemical stores to manipulate the electrical and chemical properties of the environment immediately adjacent to its surface, so as to produce the emergent effects of that model (i.e., the properties of the edges of that model and how such properties causally affect/impact adjacent sections of the environment), thus producing the emergent effects that would have been present if the NRU-construction/integration system or data-measuring system hadn’t occupied that space.
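
The layer-by-layer modeling might look something like the following toy 1D sketch (mine; the concentrations and the diffusion rule are placeholders for whatever physics the unit would actually model):

```python
# A toy 1D sketch of the layer-by-layer approach: the unit measures ionic
# concentration in each layer it passes, appends it to a running model, and
# steps that model forward with a simple explicit diffusion rule.

def diffuse(layers, rate=0.1):
    """One diffusion step: each layer exchanges with its neighbors."""
    out = layers[:]
    for i in range(len(layers)):
        left = layers[i - 1] if i > 0 else layers[i]
        right = layers[i + 1] if i < len(layers) - 1 else layers[i]
        out[i] += rate * (left + right - 2 * layers[i])
    return out

model = []                               # running model of occupied space
for measured in [140.0, 140.0, 12.0]:    # hypothetical mM Na+ per layer
    model.append(measured)               # sense the newly adjacent layer...
    model = diffuse(model)               # ...and advance the model in time
print(model)   # [140.0, 127.2, 24.8]: boundary values the unit must reproduce
```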

The third postulated solution was the use of a grid comprised of a series of hollow recesses placed in front of the sensing/measuring apparatus. The grid is impressed upon the surface of the membrane. Each compartment isolates a given section of the neuronal membrane from the rest. The constituents of each compartment are measured and recorded, most probably via uptake of its constituents and transport to a suitable measuring apparatus. A simple indexing system can keep track of which constituents came from which grid (and thus which region of the membrane they came from). The unit has a chemical store operatively connected to the means of locomotion used to transport the isolated membrane-constituents to the measuring/sensing apparatus. After a given compartment’s constituents are measured and recorded, the system then marks its constituents (determined by measurement and already stored as recordings by this point of the process), takes an equivalent molecule or compound from a chemical inventory, and replaces the substance it removed for measurement with the equivalent substance from its chemical inventory. Once this is accomplished for a given section of membrane, the grid then moves forward, farther into the membrane, leaving the replacement molecules/compounds from the biochemical inventory in the same respective spots as their original counterparts. It does this iteratively, making its way through a neuron and out the other side. This approach is the most speculative, and thus the least likely to be used. It would likely require the use of NEMS, rather than MEMS, as a necessary technological infrastructure, if the approach were to avoid becoming economically prohibitive, because in order for the compartment-constituents to be replaceable after measurement via chemical store, they need to be simple molecules and compounds rather than sections of emergent protein or tissue, which are comparatively harder to artificially synthesize and store in working order.

***

In the next chapter I describe the work done throughout late 2009 on biological/non-biological NRU hybrids, and in early 2010 on one of two new approaches to retaining subjective-continuity through a gradual replacement procedure, both of which are unrelated to concerns of graduality or sufficient functional equivalence between the biological original and the artificial replication-unit.


Mind as Interference with Itself: A New Approach to Immediate Subjective-Continuity – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 21, 2013
******************************
This essay is the sixth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first five chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, “Concepts for Functional Replication of Biological Neurons”, “Gradual Neuron Replacement for the Preservation of Subjective-Continuity”, and “Wireless Synapses, Artificial Plasticity, and Neuromodulation”.
***
Electromagnetic Theory of Mind
***

One line of thought I explored during this period of my conceptual work on life extension concerned whether it is not the material constituents of the brain manifesting consciousness, but rather the emergent electric or electromagnetic fields generated by the concerted operation of those material constituents, that instantiate mind. This work sprang from reading literature on Karl Pribram’s holonomic-brain theory, in which he developed a “holographic” theory of brain function. A hologram can be cut in half, and, if illuminated, each piece will still retain the whole image, albeit at a loss of resolution. This is due to informational redundancy in the recording procedure (i.e., because it records phase and amplitude, as opposed to just amplitude as in normal photography). Pribram’s theory sought to explain the results of experiments in which a patient had up to half his brain removed and nonetheless retained levels of memory and intelligence comparable to those he possessed prior to the procedure, and the similar results of experiments in which the brain is sectioned and the relative organization of the sections rearranged without the drastic loss in memory or functionality one would anticipate. These experiments appear to show a holonomic principle at work in the brain. I immediately saw the relation to gradual uploading, particularly the brain’s ability to take over the function of parts recently damaged or destroyed beyond repair. I also saw the emergent electric fields produced by the brain as much better candidates for exhibiting the material properties needed for such holonomic attributes. For one, electromagnetic fields (if considered as waves rather than particles) are continuous, rather than modular and discrete as in the case of atoms.

The electric-field theory of mind also seemed to provide a hypothetical explanatory model for the existence of subjective-continuity through gradual replacement. (Remember that the existence and successful implementation of subjective-continuity is validated by our subjective sense of continuity through the normative metabolic replacement of the molecular constituents of our biological neurons, a.k.a. molecular turnover.) If the emergent electric or electromagnetic fields of the brain are indeed holonomic (i.e., possess the attribute of holographic redundancy), then we have a potential explanatory model for why the loss of a constituent module (a neuron, neuron-cluster, neural network, etc.) fails to cause subjective-discontinuity: subjective-continuity is retained because the loss of a constituent part doesn’t negate the emergent information (the big picture), but only eliminates a fraction of its original resolution. This looked like empirical support for the claim that it is the electric fields, rather than the material constituents of the brain, that facilitate subjective-continuity.

Another, more speculative aspect of this theory (not supported by empirical research or literature) was the hypothesis that increased interaction among the brain's electric fields (i.e., interference via wave superposition, whose result is determined by both phase and amplitude) might itself provide a physical basis for the holographic/holonomic property of informational redundancy, if electric fields turned out not to already possess the holographic-redundancy attributes described above.

A local electromagnetic field is produced by the electrochemical activity of each neuron. This field then undergoes interference with other local fields, and at each step up the scale we have more fields interfering and combining. The level of disorder here makes the claim that salient computation is occurring dubious: the lack of precision and the high level of variability provide an ample basis for dysfunction, including increased noise, the lack of a stable (i.e., static or material) means of information storage, and poor signal transduction, or at least a high decay rate for signal propagation. However, the fact that the fields interfere at every scale means that a local electric field contains not only information encoding the operational states and functional behavior of the neuron it originated from, but also information encoding the operational states of other neurons, acquired by interacting, interfering, and combining with the fields those neurons produce. Because electromagnetic fields interfere and combine in both amplitude and phase, as in holography, if one neuron dies, some of its properties could still be encoded in the EM waves of other neurons with which its field had interfered. This appeared to provide a possible physical basis for the brain's hypothesized holonomic properties.
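
The amplitude-and-phase character of this interference can be illustrated with complex phasors (a sketch of mine, not a claim about actual neural field magnitudes):

```python
# A small sketch of interference in amplitude and phase: representing each
# neuron's local field as a complex phasor, the combined field is their sum,
# and one constituent can be recovered from the total given the others --
# a crude analogue of holographic redundancy.

import cmath

def phasor(amplitude, phase):
    return amplitude * cmath.exp(1j * phase)

f1 = phasor(1.0, 0.0)             # field of neuron 1 (hypothetical units)
f2 = phasor(0.5, cmath.pi / 3)    # field of neuron 2
combined = f1 + f2                # superposition: amplitude AND phase combine

recovered = combined - f2         # information about f1 survives in the sum
print(abs(recovered - f1) < 1e-12)   # True
```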

If electric fields are the physically continuous process that allows for continuity of consciousness (i.e., theories of emergence), then this suggests that computational substrates instantiating consciousness need to exhibit similar properties. This is not a form of vitalism: I am not postulating that some extra-physical (i.e., metaphysical) process instantiates consciousness, but rather that a material aspect does, and that such an aspect may have to be incorporated in any attempt at gradual substrate replacement meant to retain subjective-continuity through the procedure. Nor is it a matter of simulating the emergent electric fields using normative computational hardware: it is not that the electric fields provide needed functionality, or implement some salient aspect of computation that would otherwise be left out, but rather that the emergent EM fields form a physical basis for continuity and emergence unrelated to functionality but imperative to experiential-continuity or subjectivity. I distinguish this from the type of subjective-continuity discussed thus far (the feeling of being the same person through the process of gradual substrate replacement) via the terms “immediate subjective-continuity” and “temporal subjective-continuity”. Immediate subjective-continuity is the capacity to feel, period. Temporal subjective-continuity is the state of feeling like the same person you were. Thus, while temporal subjective-continuity inherently necessitates immediate subjective-continuity, immediate subjective-continuity does not require temporal subjective-continuity as a fundamental prerequisite.

Thus I explored variations of NRU operational-modality that incorporate this, particularly for the informational-functionalist (i.e., computational) NRUs, as the physical-functionalist NRUs (i.e., prosthetics on the cellular scale) were presumed to instantiate these same emergent fields via their normative operation. The approach consisted of either (a) translating the informational output of the models into the generation of physical fields, either at the end of the process or throughout it (by providing the internal area or volume of the unit with a grid composed of electrically conductive nodes, such that the voltage patterns can be physically instantiated in temporal synchrony with the computational model), or (b) constructing the computational substrate instantiating the computational model so as to generate emergent electric fields in a manner as consistent with biological operation as possible. As an example of (b): in the brain a given neuron is never in an electrically neutral state, never completely off, but rather always within a range of values between on and off [see Chapter 2], which means that there is never a break (i.e., a spatiotemporal region of discontinuity) in its emergent electric fields; these operational properties would have to be replicated by any computational substrate used to replicate biological neurons via the informational-functionalist approach, if the premise that they facilitate immediate subjective-continuity is correct.


Gradual Neuron Replacement for the Preservation of Subjective-Continuity – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 19, 2013
******************************
This essay is the fourth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first three chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, and “Concepts for Functional Replication of Biological Neurons”.
***

Gradual Uploading Applied to Single Neurons (2008)

In early 2008 I was trying to conceptualize a means of applying the logic of gradual replacement to single neurons, under the premise that extending gradual replacement down to individual sections of the neuronal membrane and its integral membrane proteins, thus increasing the degree of graduality between replacement sections, would increase the likelihood of subjective-continuity through substrate transfer. I also started moving away from the use of normative nanotechnology as the technological and methodological infrastructure for the NRUs, as it would delay the date at which these systems could be developed and experimentally verified. Instead I started focusing on conceptualizing systems that electromechanically replicate the functional modalities of the small-scale integral-membrane components of the neuron. I was calling this approach the “active mechanical membrane” to differentiate it from the electro-chemical-mechanical modalities of the nanotech approach. I also started using MEMS, rather than NEMS, as the underlying technological infrastructure (because MEMS constitute a less restrictive technological requirement), while still identifying NEMS as preferable.

I felt that replicating the metabolic replacement rate of biological neurons was the ideal to strive for, since we know that subjective-continuity is preserved through the gradual metabolic replacement (a.k.a. molecular turnover) that occurs in the existing biological brain. My approach was to measure the normal rate of metabolic replacement in existing biological neurons and the scale at which such replacement occurs (i.e., are the sections being replaced metabolically single molecules, molecular complexes, or whole molecular clusters?). Then, when replacing sections of the membrane with electromechanical functional equivalents, the same ratio of replacement-section size to replacement time would be applied; that is, the time between sectional replacements would be increased in proportion to how much larger the artificial replacement sections are than the sections replaced in normative metabolic replacement. Replacement size (or scale) is defined as the size of the section being replaced, which in the case of normative metabolic replacement would be molecular complexes. Replacement time is defined as the interval between the replacement of a given section and the replacement of a section causally connected to it; in metabolic replacement it is the time interval between a given molecular complex being replaced and an adjacent (or directly causally connected) molecular complex being replaced.

I therefore posited the following formula:

 Ta = (Sa/Sb)*Tb,

where Sa is the size of the artificial-membrane-replacement sections, Sb is the size of the metabolic replacement sections, Tb is the time interval between the metabolic replacement of two successive metabolic replacement sections, and Ta is the time interval needing to be applied to the comparatively larger artificial-membrane-replacement sections so as to preserve the same replacement-rate factor (and correspondingly the same degree of graduality) that exists in normative metabolic replacement through the process of gradual replacement on the comparatively larger scale of the artificial-membrane sections.
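
A worked instance of the formula, with deliberately hypothetical magnitudes (the text fixes no empirical values for any of these quantities):

```python
# Worked instance of Ta = (Sa / Sb) * Tb. The magnitudes below are
# placeholders, since the text gives no empirical values.

def replacement_interval(Sa, Sb, Tb):
    """Scale the metabolic replacement interval up in proportion to how
    much larger the artificial sections are than the metabolic ones."""
    return (Sa / Sb) * Tb

Sb = 1.0      # size of a metabolically replaced molecular complex
Sa = 1000.0   # an artificial-membrane section 1000x larger
Tb = 0.01     # seconds between adjacent metabolic replacements
print(replacement_interval(Sa, Sb, Tb))   # 10.0 s between artificial sections
```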

The use of the time-to-scale factor corresponding to normative molecular turnover, or “metabolic replacement”, follows from the fact that we know subjective-continuity through substrate replacement is successful at this time-to-scale ratio. However, the lack of a non-arbitrarily quantifiable measure of time, and the fact that time is infinitely divisible (i.e., can be broken down into ever smaller intervals to an arbitrarily large degree), logically necessitate that the salient variable is not time as such, but rather causal interaction between co-affective or “causally coupled” components. Interaction between components, and the state-transitions each component or procedural step undergoes, are the only viable quantifiable measures of time. Thus, while time is the variable used in the above equation, a more methodologically rigorous variable would be a measure of either (a) the number of causal interactions occurring between co-affective or “adjacent” components within the replacement interval Ta, which is synonymous with the frequency of causal interaction; or (b) the number of state-transitions a given component undergoes within the interval Ta. While the two should be generally correlative, in that state-transitions are facilitated via causal interaction among components, state-transitions may be the better metric because they allow us to quantitatively compare categorically dissimilar types of causal interaction that otherwise couldn't be summed into a single variable or measure. For example, if one type of molecular interaction has a greater effect on the state-transitions of the components involved (i.e., facilitates a comparatively greater state-transition) than another type does, then counting causal interactions may be less accurate than quantifying the magnitude of the state-transitions themselves.

In this way the rate of gradual replacement, despite being on a scale larger than normative metabolic replacement, would hypothetically follow the same degree of graduality with which biological metabolic replacement occurs. This was meant to increase the likelihood of subjective-continuity through a substrate-replacement procedure (both because it is necessarily more gradual than gradual replacement of whole individual neurons at a time, and because it preserves the degree of graduality that exists through the normative metabolic replacement that we already undergo).

Replicating Neuronal Membrane and Integral Membrane Components

Thus far, two main classes of neuron-replication approach have been identified: informational-functionalist and physical-functionalist, the former corresponding to computational (simulation/emulation) approaches and the latter to physically embodied, “prosthetic” approaches.

The physicalist-functionalist approach, however, can at this point be further sub-divided into two sub-classes. The first can be called “cyber-physicalist-functionalist”, which involves controlling the artificial ion-channels and receptor-channels via normative computation (i.e., an internal CPU or controller-circuit) operatively connected to sensors and to the electromechanical actuators and components of the ion and receptor channels (i.e., sensing the presence of an electrochemical gradient or difference in electrochemical potential [equivalent to relative ionic concentration] between the respective sides of a neuronal membrane, and activating the actuators of the artificial channels to either open or remain closed, based upon programmed rules). This sub-class is an example of a cyber-physical system, which designates any system with a high level of connection or interaction between its physical and computational components, itself a class of technology that grew out of embedded systems, which designates any system using embedded computational technology and includes many electronic devices and appliances.

The second approach, which I was then simply calling the “direct” method, would be more accurately called the passive-physicalist-functionalist approach; the cyber-physicalist sub-class is one further functional step removed from it. Electronic systems are differentiated from electric systems by being active (i.e., performing computation or, more generally, signal-processing), whereas electric systems are passive and aren't meant to transform (i.e., process) incoming signals (though any computational system's individual components must at some level be comprised of electric, passive components). Whereas the cyber-physicalist-functionalist sub-class has computational technology controlling its processes, the passive-physicalist-functionalist approach has components emergently constituting a computational device. This consisted of providing the artificial ion-channels with a means of opening in the presence of a given electric potential difference (i.e., voltage), and the receptor-channels with a means of opening in response to the unique attributes of the neurotransmitters they correspond to (such as chemical bonding, as in ligand-based receptors, or alternatively in response to their electrical properties, according to the same operational-modality as the artificial ion-channels), without a CPU correlating an attribute measured by sensors with the corresponding electromechanical behavior of the membrane needing to be replicated in response. Such passive systems differ from computation in that they require only feedback between components: a system of mechanical, electrical, or electromechanical components is operatively connected so as to produce specific system-states or processes in response to specific sensed states of its environment or itself. An example in the present case would be constructing an ionic channel from piezoelectric materials, such that the presence of a certain electrochemical potential induces internal mechanical strain in the material; the spacing, dimensions, and quantity of segments would be designed so that the channel opens or closes as a single unit in response to one electrochemical potential while remaining unresponsive (or insufficiently responsive, i.e., not opening all the way) to another. Biological neurons work in a similarly passive way: their systems are organized to exhibit specific responses to specific stimuli, in basic stimulus-response causal sequences, by virtue of their own properties rather than by external control of individual components via a CPU. A toy sketch of this passive modality follows.
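
In the sketch below (mine; the voltage band is a placeholder), the channel's response is fixed entirely by the parameters it was "built" with, standing in for material properties such as a piezoelectric strain response, with no CPU mediating the decision:

```python
# A toy sketch of the passive modality: the channel's behavior is fixed by
# its built-in properties (here, a designed voltage band), not by a CPU.

def passive_channel_state(potential_mv, open_band=(-55.0, -30.0)):
    """Strain-driven gate: fully open only inside its designed voltage band."""
    low, high = open_band
    return "open" if low <= potential_mv <= high else "closed"

print(passive_channel_state(-70.0))   # "closed" at resting potential
print(passive_channel_state(-40.0))   # "open" during depolarization
```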

However, I found the cyber-physicalist approach preferable if it proved to be sufficient due to the ability to reprogram computational systems, which isn’t possible in passive systems without necessitating a reorganization of the component—which itself necessitates an increase in the required technological infrastructure, thereby increasing cost and design-requirements. This limit on reprogramming also imposes a limit on our ability to modify and modulate the operation of the NRUs (which will be necessary to retain the function of neural plasticity—presumably a prerequisite for experiential subjectivity and memory). The cyber-physicalist approach also seemed preferable due to a larger degree of variability in its operation: it would be easier to operatively connect electromechanical membrane components (e.g., ionic channels, ion pumps) to a CPU, and through the CPU to sensors, programming it to elicit a specific sequence of ionic-channel opening and closing in response to specific sensor-states, than it would be to design artificial ionic channels to respond directly to the presence of an electric potential with sufficient precision and accuracy.

In the cyber-physicalist-functionalist approach the membrane material is constructed so as to be (a) electrically insulative, while (b) remaining thin enough to act as a capacitor via the electric potential differential (which is synonymous with voltage) between the two sides of the membrane.

The ion-channel replacement units consisted of electromechanical pores that open for a fixed amount of time in the presence of an ion gradient (a difference in electric potential between the two sides of the membrane); this was to be accomplished electromechanically via a means of sensing membrane depolarization (such as through the use of reference electrodes) connected to a microcircuit (or nanocircuit, hereafter referred to as a CPU) programmed to open the electromechanical ion-channels for a length of time corresponding to the rate of normative biological repolarization (i.e., the time it takes to restore the membrane polarization to the resting membrane potential following an action potential), thus allowing the influx of ions at a rate equal to that of the biological ion-channels. Likewise, sections of the post-synaptic membrane were to be replaced by sections of inorganic membrane containing units that sense the presence of the neurotransmitter corresponding to the receptor being replaced; these were to be connected to a microcircuit programmed to elicit specific changes corresponding to the change in postsynaptic potential that receptor-binding produces in the biological membrane (i.e., an increase or decrease in ionic permeability, such as through increasing or decreasing the diameter of ion-channels, e.g., via an increase or decrease in electric stimulation of the piezoelectric crystals described above, or through a change in the number of open channels). This requires a bit more technological infrastructure than I anticipated the ion-channels requiring.
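
A control-loop sketch of the cyber-physicalist rule just described (a sketch of mine; the threshold and timing values are placeholders, not values given in the text):

```python
# Sense-act loop for a CPU-controlled ion channel: a sensed depolarization
# triggers the actuator, which holds the pore open for a fixed interval
# matched to the biological repolarization time. Values are hypothetical.

THRESHOLD_MV = -55.0       # hypothetical depolarization threshold
REPOLARIZATION_MS = 2.0    # hypothetical time to restore resting potential

def cpu_step(membrane_mv, channel):
    """One iteration of the programmed sense-act rule (1 ms per tick)."""
    if membrane_mv >= THRESHOLD_MV and not channel["open"]:
        channel["open"] = True
        channel["timer_ms"] = REPOLARIZATION_MS   # open for a fixed interval
    elif channel["open"]:
        channel["timer_ms"] -= 1.0
        if channel["timer_ms"] <= 0:
            channel["open"] = False
    return channel

channel = {"open": False, "timer_ms": 0.0}
for mv in [-70.0, -54.0, -60.0, -60.0, -60.0]:   # sensed potentials per tick
    channel = cpu_step(mv, channel)
    print(channel["open"])   # False, True, True, False, False
```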

While the accurate and active detection of particular types and relative quantities of neurotransmitters is normally ligand-gated, we have a variety of potential, mutually exclusive approaches. For ligand-based receptors, sensing the presence and steepness of electrochemical gradients may not suffice. However, we don't necessarily have to use ligand-receptor fitting to replicate the functionality of ligand-based receptors. If there is a difference in charge (i.e., valence) between the neurotransmitter needing to be detected and other neurotransmitters, and the degree of that difference is detectable given the precision of our sensing technologies, then a means of sensing a specific charge may prove sufficient. In the event that sensing electric charge proved insufficient, however, I developed an alternate method. Different chemicals (e.g., neurotransmitters, but also potentially electrolyte solutions) have different weight-to-volume ratios. We equip the artificial-membrane sections with an empty compartment capable of measuring the weight of its contents. Since the volume of the compartment is already known, this allows us to identify specific neurotransmitters (or other relevant molecules and compounds) by their unique weight-to-volume ratio. By operatively connecting the unit's CPU to this sensor, we can program specific operations (i.e., the receptor opens, allowing entry for a fixed amount of time, or remains closed) in response to the detection of specific neurotransmitters. Though unlikely to be necessitated, this method could also work for the detection of specific ions, and thus could serve as the operating mechanism underlying the artificial ion-channels as well, though this would probably require higher-precision weight-to-volume comparison than is required for neurotransmitters.
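
A sketch of the weight-to-volume identification scheme (mine; the density table is invented, and a real implementation would need measured signatures and much finer tolerances):

```python
# Weight-to-volume identification: with the compartment volume fixed, a
# single weight measurement yields a density to match against an inventory.
# All densities below are placeholders, not measured values.

COMPARTMENT_VOLUME = 1.0    # arbitrary fixed units

DENSITY_TABLE = {"glutamate": 1.46, "GABA": 1.11, "acetylcholine": 1.09}

def identify(measured_weight, volume=COMPARTMENT_VOLUME, tolerance=0.01):
    """Return the transmitter whose density best matches, within tolerance."""
    density = measured_weight / volume
    name, ref = min(DENSITY_TABLE.items(), key=lambda kv: abs(kv[1] - density))
    return name if abs(ref - density) <= tolerance else None

print(identify(1.46))   # "glutamate"
print(identify(1.20))   # None: nothing in the inventory within tolerance
```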

Sectional Integration with Biological Neurons

Integrating replacement-membrane sections with adjacent sections of the existing lipid bilayer membrane becomes much less problematic if the membrane is homogeneous at the scale at which sections are handled (determined by the size of the replacement sections), as in the case of biological tissues, rather than molecularly heterogeneous; that is, if we are affixing the edges to a biological tissue rather than to complexes of individual lipid molecules. Reasons for hypothesizing homogeneity at the replacement scale include (a) the ability of experimenters and medical researchers to puncture the neuronal membrane with a micropipette (so as to measure membrane voltage) without rupturing the membrane beyond functionality, and (b) the fact that sodium and potassium ions do not leak through gaps between the individual lipid molecules, which would be present if the membrane were heterogeneous at this scale. If we find homogeneity at the scale of sectional replacement, we can use more normative means of affixing the edges of the replacement section to the existing lipid bilayer membrane, such as micromechanical fasteners, adhesive, or fusing via heating or energizing. However, I also developed an approach applicable if the scale of sectional replacement was found to be molecular and thus heterogeneous: we find an intermediate chemical that stably bonds both to the lipid molecules constituting the membrane and to the molecules or compounds constituting the artificial membrane section. Note that if the molecules or compounds constituting either must be energized into an abnormal (i.e., unstable) energy state to make them susceptible to bonding, this is fine so long as the energies don't reach levels damaging to the biological cell (or so long as such energies can be absorbed before impinging upon or otherwise damaging the cell). If no such intermediate molecule or compound can be found, a second intermediate chemical that stably bonds with two alternate intermediate molecules (which themselves bond to the biological membrane and the non-biological membrane section, respectively) can be used. The chances of finding a sequence of chemicals that stably bond (i.e., in which a given chemical forms stable bonds with the preceding and succeeding chemicals in the sequence) increase with the number of intermediate chemicals used. Note that it might be possible to apply constant external energization to certain molecules so as to force them to bond if a stable bond cannot otherwise be formed, but this would probably be economically prohibitive and potentially dangerous, depending on the levels of energy and the energization precision.
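
The search for a chain of intermediate chemicals can be framed as pathfinding over a bond-compatibility graph, as in this sketch (mine; the "linker" chemicals and the bond table are invented for illustration):

```python
# Toy search for a chain of intermediate chemicals: breadth-first search
# over a graph whose edges mark pairs known to form stable bonds, from the
# lipid membrane to the artificial-membrane material. The graph is invented.

from collections import deque

BONDS = {
    "lipid_bilayer": {"linker_A"},
    "linker_A": {"lipid_bilayer", "linker_B"},
    "linker_B": {"linker_A", "artificial_membrane"},
    "artificial_membrane": {"linker_B"},
}

def bonding_chain(start, goal, bonds=BONDS):
    """Shortest sequence of chemicals in which adjacent members bond stably."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in bonds.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # no stable chain found with the known intermediates

print(bonding_chain("lipid_bilayer", "artificial_membrane"))
# ['lipid_bilayer', 'linker_A', 'linker_B', 'artificial_membrane']
```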

I also worked on the means of constructing and integrating these components in vivo, using MEMS or NEMS. Most of the developments in this regard are described in the next chapter, but some specific variations on the construction procedure were necessitated by the sectional-integration procedure, which I will comment on here. The integration unit positions itself above the membrane section. Using the data acquired by the neuron data-measurement units, it identifies the constituents of a given membrane section and assigns it a number corresponding to a type of artificial-membrane section in the integration unit’s section-inventory (essentially a store of stacked artificial-membrane sections). A means of disconnecting a section of lipid bilayer membrane from the biological neuron is then lowered onto it. This could be a hollow rectangular compartment whose edges sever the lipid bilayer membrane via force (e.g., edges terminating in blades), energy (e.g., edges terminating in heating elements), or chemical corrosion (e.g., edges coated with, or secreting, a corrosive substance). The detached section of lipid bilayer membrane is then lifted out and compacted, to be drawn into a separate compartment for storing waste organic materials. The artificial-membrane section is subsequently transported down through the same compartment. Since the section lies flat across the compartment, perpendicular to its walls, moving it down through the compartment should force the intracellular fluid (which would presumably have leaked into the constructional container’s internal area when the lipid-bilayer membrane section was removed) back into the cell. Once the artificial-membrane section is in place, the preferred integration method is applied.
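The integration sequence just described can be summarized as an ordered procedure. The following sketch is purely organizational; every object and method name stands in for a hypothetical MEMS/NEMS hardware operation, not an existing API:

```python
def replace_membrane_section(unit, measurement_data):
    """One pass of the sectional-integration procedure described above."""
    # Look up which artificial-section type matches the measured constituents.
    section_type = unit.inventory_number_for(measurement_data.constituents)
    unit.position_over(measurement_data.location)
    unit.sever_section()           # blades, heating elements, or corrosive edges
    unit.lift_and_compact_waste()  # removed membrane goes to the waste compartment
    artificial = unit.take_from_inventory(section_type)
    unit.lower_into_place(artificial)  # piston-like descent pushes leaked fluid back in
    unit.apply_integration_method(artificial)  # fasteners, adhesive, fusing, or bonding chain
```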

Sub-neuronal (i.e., sectional) replacement also necessitates that any dynamic patterns of polarization (e.g., an action potential) be continued during the interval between section removal and artificial-section integration. This was to be achieved by chemical sensors (detecting membrane depolarization) operatively connected to actuators that manipulate ionic concentration on the other side of the membrane gap (via the release or uptake of ions from biochemical inventories) so as to induce membrane depolarization on the far side of the gap at the right time. Techniques such as partially freezing the cell, so as to slow the rate of membrane depolarization and/or the propagation velocity of action potentials, were also considered.
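A rough sketch of that sensor-actuator bridge follows, assuming a hypothetical sensor on the near side of the gap and an ion-releasing actuator on the far side; the threshold and delay values are illustrative, not physiological commitments:

```python
import time

DEPOLARIZATION_THRESHOLD_MV = -55.0  # illustrative firing threshold
GAP_PROPAGATION_DELAY_S = 0.0005     # time the wave would normally take to cross the gap

def bridge_gap(sensor, actuator):
    """Detect a depolarization wave on one side; re-induce it on the other."""
    if sensor.read_membrane_potential_mv() >= DEPOLARIZATION_THRESHOLD_MV:
        time.sleep(GAP_PROPAGATION_DELAY_S)      # preserve normal propagation timing
        actuator.release_ions_from_inventory()   # induce depolarization on the far side
```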

The next chapter describes my continued work in 2008, focusing on (a) the design requirements for replicating the neural plasticity necessary for memory and subjectivity, (b) the active and conscious modulation and modification of neural operation, (c) wireless synaptic transmission, (d) ways to integrate new neural networks (i.e., mental amplification and augmentation) without disrupting the operation of existing neural networks and regions, and (e) a gradual transition from, or intermediary phase between, the physical (i.e., prosthetic) approach and the informational (i.e., computational, or mind-uploading proper) approach.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 18, 2013
******************************
This essay is the third chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first two chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death” and “Immortality: Material or Ethereal? Nanotech Does Both!“.
***

The simplest approach to the functional replication of biological neurons I conceived of during this period involved what is normally called a “black-box” model of a neuron. This was already a concept in the wider brain-emulation community, though I had yet to find out about it. It is even simpler than the mathematically weighted Artificial Neurons discussed in the previous chapter. Rather than emulating or simulating the behavior of a neuron (i.e., using actual computational, or more generally signal, processing), we (1) determine the range of input values that a neuron responds to, (2) stimulate the neuron at each interval within that input-range (the number of intervals depending on the precision of the stimulus), and (3) record the corresponding range of outputs.

This reduces the neuron to what is essentially a lookup table (or, more formally, an associative array). The input ranges I originally considered (in 2007) consisted of a range of electrical potentials, but were later developed (in 2008) to include different cumulative organizations of specific voltage values (i.e., some inputs activated and others not) and finally the chemical inputs and outputs of neurons. I eventually saw the black-box approach as applicable at the sub-neuron scale as well—e.g., to sections of the cellular membrane. This creates a greater degree of functional precision, bringing the functional modality of the black-box NRU-class into greater accordance with the functional modality of biological neurons. (I.e., it is closer to biological neurons because they do in fact process multiple inputs separately, rather than a single cumulative sum at once, as in the previous versions of the black-box approach.) We would also have a higher degree of variability for a given quantity of inputs.
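To make the lookup-table idea concrete, here is a minimal sketch in Python, assuming a hypothetical stimulate_neuron measurement routine; the input range, step size, and nearest-entry lookup policy are illustrative assumptions, not a committed design:

```python
# A minimal sketch of the black-box NRU concept: the neuron is reduced to a
# lookup table (associative array) built by stimulating it across its input
# range and recording the outputs. `stimulate_neuron` stands in for whatever
# apparatus records the biological response; all values are illustrative.

def build_blackbox_table(stimulate_neuron, v_min_mv=-90.0, v_max_mv=40.0, step_mv=0.5):
    """Sweep the input range at a fixed interval and record each response."""
    table = {}
    v = v_min_mv
    while v <= v_max_mv:
        table[round(v, 3)] = stimulate_neuron(v)  # record the observed output
        v += step_mv
    return table

def blackbox_response(table, v_in_mv):
    """Respond to an arbitrary input by returning the nearest recorded entry."""
    nearest = min(table, key=lambda v: abs(v - v_in_mv))
    return table[nearest]
```

The precision of the stimulus interval (step_mv above) directly determines how faithfully the table approximates the neuron's continuous response curve.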

I soon chanced upon literature dealing with MEMS (micro-electro-mechanical systems) and NEMS (nano-electro-mechanical systems), which eventually led me to nanotechnology and its use in nanosurgery in particular. I saw nanotechnology as the preferred technological infrastructure regardless of the approach used; its physical nature (i.e., operational and functional modalities) could facilitate the electrical and chemical processes of the neuron if the physicalist-functionalist (i.e., physically embodied or ‘prosthetic’) approach proved either preferable or required, while the computation required for its normative functioning (regardless of its particular application) assured that it could facilitate the informationalist-functionalist replication (i.e., computational emulation or simulation) of neurons if that approach proved preferable. This was true of MEMS as well, with the sole exception that MEMS cannot directly synthesize neurotransmitters via mechanosynthesis, being limited in this regard to the release of pre-synthesized biochemical inventories. Thus I felt that I was able to work on conceptual development of the methodological and technological infrastructure underlying both (or at least on variations to the existing operational modalities of MEMS and NEMS so as to make them suitable for their intended use), without having to definitively choose one technological/methodological infrastructure over the other. Moreover, there could be processes that are reducible to computation, yet still fail to be included in a computational emulation due to our simply failing to discover the principles underlying them. The prosthetic approach had the potential of replicating this aspect by integrating such a process, as it exists in the biological environment, into its own physical operation, and performing iterative maintenance or replacement of the biological process, until such a time as we are able to discover the underlying principles of those processes (which is a prerequisite for discovering how they contribute to the emergent computation occurring in the neuron) and thus include them in the informationalist-functionalist approach.

Also, I had by this time come across the existing approaches to Mind-Uploading and Whole-Brain Emulation (WBE), including Randal Koene’s minduploading.org, and realized that the notion of immortality through gradually replacing biological neurons with functional equivalents wasn’t strictly my own. I hadn’t yet come across Kurzweil’s thinking in regard to gradual uploading described in The Singularity is Near (where he suggests a similarly nanotechnological approach), and so felt that there was a gap in the extant literature in regard to how the emulated neurons or neural networks were to communicate with existing biological neurons (which is an essential requirement of gradual uploading and thus of any approach meant to facilitate subjective-continuity through substrate replacement). Thus my perceived role changed from being the father of this concept to filling in the gaps and inconsistencies in the already-extant approach and further developing it past its present state. This is another aspect informing my choice to work on and further varietize both the computational and physical-prosthetic approaches—because this, along with the artificial-biological neural-communication problem, was what I perceived as remaining to be done after discovering WBE.

The anticipated use of MEMS and NEMS in emulating the physical processes of the neurons included first simply electrical potentials, but eventually developed to include the chemical aspects of the neuron as well, in tandem with my increasing understanding of neuroscience. I had by this time come across Drexler’s Engines of Creation, which was my first introduction to antecedent proposals for immortality—specifically his notion of iterative cellular upkeep and repair performed by nanobots. I applied his concept of mechanosynthesis to the NRUs to facilitate the artificial synthesis of neurotransmitters. I eventually realized that the use of pre-synthesized chemical stores of neurotransmitters was a simpler approach that could be implemented via MEMS, thus being more inclusive for not necessitating nanotechnology as a required technological infrastructure. I also soon realized that we could eliminate the need for neurotransmitters completely by recording how specific neurotransmitters affect the nature of membrane depolarization at the post-synaptic membrane and subsequently encoding this into the post-synaptic NRU (i.e., the length and degree of depolarization or hyperpolarization, and possibly the diameter of ion-channels or the differential opening of ion-channels—that is, some and not others). We would then assign a discrete voltage to each possible neurotransmitter (or emergent pattern of neurotransmitters; salient variables include type, quantity, and relative location) such that transmitting that voltage makes the post-synaptic NRU’s controlling circuit implement the membrane-polarization changes (via changing the number of open artificial ion-channels, or how long they remain open or closed, or their diameter/porosity) corresponding to the changes in biological post-synaptic membrane depolarization normally caused by that neurotransmitter.
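One way to picture that encoding scheme is as a table mapping each discrete voltage code to the membrane-polarization parameters recorded from the biological post-synaptic membrane. A hedged sketch follows; the codes, parameter names, and values are all illustrative assumptions rather than a worked-out design:

```python
from dataclasses import dataclass

@dataclass
class PolarizationChange:
    channels_to_open: int  # how many artificial ion-channels to open
    duration_ms: float     # how long they remain open
    porosity: float        # relative channel diameter/porosity (0 to 1)

# Hypothetical encoding table, recorded from the biological post-synaptic
# membrane: discrete voltage code -> membrane-polarization parameters.
NEUROTRANSMITTER_CODES = {
    1: PolarizationChange(channels_to_open=120, duration_ms=2.0, porosity=0.8),
    2: PolarizationChange(channels_to_open=60, duration_ms=5.0, porosity=0.4),
}

def receive_code(code, controlling_circuit):
    """Implement the recorded polarization change for a transmitted code."""
    change = NEUROTRANSMITTER_CODES[code]
    controlling_circuit.open_channels(change.channels_to_open,
                                      change.duration_ms,
                                      change.porosity)
```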

In terms of the enhancement/self-modification side of things, I also realized during this period that mental augmentation (particularly the intensive integration of artificial-neural-networks with the existing brain) increases the efficacy of gradual uploading by decreasing the total portion of your brain occupied by the biological region being replaced—thus effectively making that portion’s temporary operational disconnection from the rest of the brain more negligible to concerns of subjective-continuity.

While I was thinking of the societal implications of self-modification and self-modulation in general, I wasn’t really consciously trying to do active conceptual work (e.g., working on designs for pragmatic technologies and methodologies as I was with limitless-longevity) on this side of the project due to seeing the end of death as being a much more pressing moral imperative than increasing our degree of self-determination. The 100,000 unprecedented calamities that befall humanity every day cannot wait; for these dying fires it is now or neverness.

Virtual Verification Experiments

The various alternative approaches to gradual substrate-replacement were meant to be alternative designs contingent upon various premises for what was needed to replicate functionality while retaining subjective-continuity through gradual replacement. I saw the various embodiments as being narrowed down through empirical validation prior to any whole-brain replication experiments. However, I now see that multiple alternative approaches—based, for example, on computational emulation (informationalist-functionalist) and physical replication (physicalist-functionalist), the two main approaches discussed thus far—would have concurrent appeal to different segments of the population. The physicalist-functionalist approach might appeal to the wide number of people who, for one metaphysical prescription or another, don’t believe enough in the computational reducibility of mind to bet their lives on it.

These experiments originally consisted of applying sensors to a given biological neuron, constructing NRUs based on a series of variations on the two main approaches, running each, and looking for any functional divergence over time. This is essentially the same approach outlined in the WBE Roadmap (which I had yet to discover at this point), which suggests a validation procedure involving experiments done on single neurons before moving on to the organismal emulation of increasingly complex species, up to and including the human. My thinking in regard to these experiments evolved over the next few years to also include some novel approaches that I don’t think have yet been discussed in communities interested in brain-emulation.

An equivalent physical or computational simulation of the biological neuron’s environment is required to verify functional equivalence; otherwise we wouldn’t be able to distinguish between functional divergence due to an insufficient replication-approach/NRU-design and functional divergence due to a difference in either input or operation between the model and the original (caused by insufficiently synchronizing the environmental parameters of the NRU and its corresponding original). Isolating these neurons from their organismal environment allows the necessary fidelity (and thus computational intensity) of the simulation to be minimized by reducing the number of environmental variables affecting the biological neuron during the span of the initial experiments. Moreover, even if this doesn’t give us a perfectly reliable model of the efficacy of functional replication given the number of environmental variables one expects a neuron belonging to a full brain to have, it is a fair approximator. Some NRU designs might fail even in a relatively simple neuronal environment, and thus testing all NRU designs using a number of environmental variables similar to the biological brain might be unnecessary (and thus economically unjustifiable) given its cost-benefit ratio. And since we need to isolate the neuron to perform any early non-whole-organism experiments (i.e., on individual neurons) at all, having precise control over the number and nature of environmental variables would be relatively easy, as this is already an important part of the methodology used for normative biological experimentation anyway—lack of control over environmental variables makes for an inconsistent methodology and thus for unreliable data.
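The core of the validation experiment reduces to a simple comparison loop: drive the candidate NRU and the recorded biological neuron with the same controlled inputs and track their divergence over time. A minimal sketch, with nru_model and the recorded outputs as hypothetical stand-ins for the actual apparatus, and the tolerance as an arbitrary assumption:

```python
def functional_divergence(nru_model, inputs, recorded_outputs):
    """Per-step absolute divergence between the NRU and the original neuron."""
    return [abs(nru_model(x) - recorded_outputs[t]) for t, x in enumerate(inputs)]

def passes_validation(divergences, tolerance=0.05):
    """Accept the NRU design only if divergence stays within tolerance."""
    return max(divergences) <= tolerance
```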

And as we increase to the whole-network and eventually organismal level, a similar reduction of the computational requirements of the NRU’s environmental simulation is possible by replacing the inputs or sensory mechanisms (from single photocell to whole organs) with VR-modulated input. The required complexity and thus computational intensity of a sensorially mediated environment can be vastly minimized if the normative sensory environment of the organism is supplanted with a much-simplified VR simulation.

Note that the efficacy of this approach in comparison with the first (reducing actual environmental variables) is hypothetically greater, because going from a simplified VR version to the original sensorial environment is a difference not of category but of degree. Thus a potentially fruitful variation on the first experiment (physical reduction of a biological neuron’s environmental variables) would be not the complete elimination of environmental variables, but rather a decrease in the range or degree of deviation of each variable—keeping all the categories of variable while reducing their degree.

Anecdotally, one novel modification conceived during this period involves distributing sensors (operatively connected to the sensory areas of the CNS) in the brain itself, so that we can viscerally sense ourselves thinking—the notion of metasensation: a sensorial infinite regress caused by having sensors in the sensory modules of the CNS, essentially allowing one to sense oneself sensing oneself sensing.

Another is a seeming refigurement of David Pearce’s Hedonistic Imperative—namely, the use of active NRU modulation to negate the effects of cell (or, more generally, stimulus-response) desensitization—the fact that the more times we experience something, or indeed even think something, the more it decreases in intensity. I felt that this was what made some of us lose interest in our lovers and become bored by things we once enjoyed. If we were able to stop cell desensitization, we wouldn’t have to needlessly lose experiential amplitude for the things we love.

In the next chapter I will describe the work I did in the first months of 2008, during which I worked almost wholly on conceptual varieties of the physically embodied prosthetic (i.e., physical-functionalist) approach (particularly in gradually replacing subsections of individual neurons to increase how gradual the cumulative procedure is) for several reasons:

The original utility of ‘hedging our bets’ as discussed earlier—developing multiple approaches increases evolutionary diversity; thus, if one approach fails, we have other approaches to try.

I felt the computational side was already largely developed in the work done by others in Whole-Brain Emulation, and thus that I would be benefiting the larger objective of indefinite longevity more by focusing on those areas that were then comparatively less developed.

The perceived benefit of a new approach to subjective-continuity through a substrate-replacement procedure aiming to increase the likelihood of gradual uploading’s success by increasing the procedure’s cumulative degree of graduality. The approach was called Iterative Gradual Replacement and consisted of undergoing several gradual-replacement procedures, wherein the class of NRU used becomes progressively less similar to the operational modality of the original, biological neurons with each iteration; the greater the number of iterations used, the less discontinuous each replacement-phase is in relation to its preceding and succeeding phases. The most basic embodiment of this approach would involve gradual replacement with physical-functionalist (prosthetic) NRUs that are in turn gradually replaced with informationalist-functionalist (computational/emulatory) NRUs. My qualms with this approach today stem from the observation that the operational modalities of the physically embodied NRUs seem as discontinuous in relation to the operational modalities of the computational NRUs as the operational modalities of the biological neurons do. The problem seems to result from the lack of an intermediary stage between physical embodiment and computational (or second-order) embodiment.

Immortality: Material or Ethereal? Nanotech Does Both! – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 11, 2013
******************************

This essay is the second chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first chapter was previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death“.

In August 2006 I conceived of the initial cybernetic brain-transplant procedure. It originated from a very simple, even intuitive sentiment: if there were heart and lung machines and prosthetic organs, then why couldn’t these be integrated in combination with modern (and future) robotics to keep the brain alive past the death of its biological body? I saw a possibility, felt its magnitude, and threw myself into realizing it. I couldn’t think of a nobler quest than the final eradication of involuntary death, and felt willing to spend the rest of my life trying to make it happen.

First I collected research on organic brain transplantation, on maintaining the brain’s homeostatic and regulatory mechanisms outside the body (or in this case without the body), on a host of prosthetic and robotic technologies (including sensory prosthesis and substitution), and on the work in Brain-Computer-Interface technologies that would eventually allow a given brain to control its new, non-biological body—essentially collecting the disparate mechanisms and technologies that would collectively converge to facilitate the creation of a fully cybernetic body to house the organic brain and keep it alive past the death of its homeostatic and regulatory organs.

I had by this point come across online literature on Artificial Neurons (ANs) and Artificial Neural Networks (ANNs), which are basically simplified mathematical models of neurons meant to process information in a way coarsely comparable to them. There was no mention in the literature of integrating them with existing neurons or of replacing existing neurons toward the objective of immortality; their use was merely as an interesting approach to computation, particularly optimal in certain situations. While artificial neurons can be run on general-purpose hardware (massively parallel architectures being the most efficient for ANNs, however), I had something more akin to neuromorphic hardware in mind (though I wasn’t aware of that term just yet).

At its most fundamental level, an Artificial Neuron need not even be physical at all. Its basic definition is a mathematical model roughly based on neuronal operation, and there is nothing precluding that model from existing solely on paper, with no actual computation going on. When I discovered them, I had thought that a given artificial neuron was a physically embodied entity rather than a software simulation—i.e., an electronic device that operates in a way comparable to biological neurons. Upon learning that they were mathematical models, however, and that each AN needn’t be a separate entity from the rest of the ANs in a given AN Network, I saw no problem in designing them so as to be separate physical entities (which they needed to be in order to fit the purposes I had for them—namely, the gradual replacement of biological neurons with prosthetic functional equivalents). Each AN would be a software entity run on a piece of computational substrate, enclosed in a protective casing allowing it to co-exist with the biological neurons already in place. The mathematical or informational outputs of the simulated neuron would be translated into biophysical, chemical, and electrical output by operatively connecting the simulation to an appropriate series of actuators (which could range from being as simple as producing electric fields or currents, to the release of chemical stores of neurotransmitters), along with a series of sensors to translate biophysical, chemical, and electrical properties into the mathematical or informational form they would need to be in to be accepted as input by the simulated AN.
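A hedged sketch of such a unit: a software neuron model on its own substrate, wrapped by a sensor and an actuator that translate between the physical and informational domains. The leaky-integrator model and the sensor/actuator interfaces below are illustrative placeholders, not a committed design:

```python
# Sketch of a physically instantiated AN: sensors turn biophysical input into
# numbers, a software model processes them, actuators turn the numeric output
# back into physical effect. All interfaces here are hypothetical.

class SimulatedArtificialNeuron:
    def __init__(self, threshold=1.0, decay=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.decay = decay

    def step(self, x):
        """Integrate input; emit 1.0 on crossing the threshold, else 0.0."""
        self.potential = self.potential * self.decay + x
        if self.potential >= self.threshold:
            self.potential = 0.0
            return 1.0
        return 0.0

def run_cycle(sensor, neuron, actuator):
    """One sense-compute-actuate cycle of the encased AN unit."""
    x = sensor.read()    # biophysical/chemical state -> number
    y = neuron.step(x)   # informational processing
    actuator.apply(y)    # number -> electrical/chemical output
```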

Thus at this point I didn’t make a fundamental distinction between replicating the functions and operations of a neuron via physical embodiment (e.g., via physically embodied electrical, chemical, and/or electromechanical systems) or via virtual embodiment (usefully considered as 2nd-order embodiment, e.g., via a mathematical or computational model, whether simulation or emulation, run on a 1st-order physically embodied computational substrate).

The potential advantages, disadvantages, and categorical differences between these two approaches would not become apparent to me for a few more months. When I discovered ANs, still thinking of them as physically embodied electronic devices rather than as mathematical or computational models, I hadn’t yet moved on to ways of preserving the organic brain itself so as to delay its organic death. Their utility in constituting a more permanent, durable, and readily repairable supplement for our biological neurons wasn’t yet apparent.

I initially saw their utility as being intelligence amplification, extension, and modification through their integration with the existing biological brain. I realized that they were categorically different from Brain-Computer Interfaces (BCIs) and normative neural prostheses in being able to become an integral and continuous part of our minds and personalities—or, more properly, the subjective, experiential parts of our minds. If they communicated with single neurons and interacted with them on their own terms—if the two were operationally indistinct—then they could become a continuous part of us in a way that didn’t seem possible for normative BCI, due to its fundamental operational dissimilarity with existing biological neural networks. I also collected research on the artificial synthesis and regeneration of biological neurons as an alternative to ANs. This approach would replace an aging or dying neuron with an artificially synthesized but still structurally and operationally biological neuron, so as to maintain the aging or dying neuron’s existing connections and relative location. I saw this procedure (i.e., adding artificial or artificially synthesized but still biological neurons to the existing neurons constituting our brains, not yet for the purposes of gradually replacing the brain but instead for the purpose of mental expansion and amplification) as not only allowing us to extend our existing functional and experiential modalities (e.g., making us smarter through an increase in synaptic density and connectivity, and an increase in the number of neurons in general) but even to create fundamentally new functional and experiential modalities that are categorically unimaginable to us now, via the integration of wholly new Artificial Neural Networks embodying such new modalities. Note that I saw this as newly possible with my cybernetic-body approach because additional space could be made for the additional neurons and neural networks, whereas the degree with which we could integrate new, artificial neural networks in a normal biological body would be limited by the available volume of the unmodified skull.

Before I discovered ANs, I speculated in my notes as to whether the “bionic nerves” alluded to in some of the literature I had collected by this point (specifically regarding BCI, neural prosthesis, and the ability to operatively connect a robotic prosthetic extremity—e.g., an arm or a leg—via BCI) could be used to extend the total number of neurons and synaptic connections in the biological brain. This sprang from my knowledge of the operational similarities between neurons and muscle cells, both of which belong to the larger class of excitable cells.

Kurzweil’s cyborgification approach (i.e., that we could integrate non-biological systems with our biological brains to such an extent that the biological portions become so small as to be negligible to our subjective-continuity when they succumb to cell-death, thus achieving effective immortality without needing to actually replace any of our existing biological neurons at all) may have been implicit in this concept. I envisioned our brains increasing in size many times over, such that the majority of our mind would be embodied or instantiated more by the artificial portion than by the biological portions. The fact that the degree with which the loss of a part of our brain affects our emergent personalities depends on how big that lost part is in comparison to the total size of the brain (other potential metrics alternative to size include connectivity and the degree with which other systems depend on that portion for their own normative operation)—the loss of a lobe being much worse than the loss of a neuron—follows naturally from this initial premise. The lack of any explicit statement of this realization in my notes during this period, however, makes this mere speculation.

It wasn’t until November 11, 2006, that I had the fundamental insight underlying mind-uploading—that the replacement of existing biological neurons with non-biological functional equivalents that maintain the existing relative location and connection of such biological neurons could very well facilitate maintaining the memory and personality embodied therein or instantiated thereby—essentially achieving potential technological immortality, since the approach is based on replacement, and iterations of replacement-cycles can be run indefinitely. Moreover, the fact that we would be manufacturing such functional equivalents ourselves means that we could not only diagnose potential eventual dysfunctions more easily and quickly, but could manufacture them so as to have readily replaceable parts, thus simplifying the process of physically remediating any such potential dysfunction or operational degradation—even going so far as to include systems for the safe import and export of replacement components, or to make all such components readily detachable, so that we don’t have to cause damage to adjacent structures and systems in the process of removing a given component.

Perhaps it wasn’t so large a conceptual step from knowledge of the existence of computational models of neurons to the realization of using them to replace existing biological neurons towards the aim of immortality. Perhaps I take too much credit for independently conceiving both the underlying conceptual gestalt of mind-uploading, as well as some specific technologies and methodologies for its pragmatic technological implementation. Nonetheless, it was a realization I arrived at on my own, and was one that I felt would allow us to escape the biological death of the brain itself.

While I was aware (after a little more research) that ANNs were mathematical (and thus computational) models of neurons, hereafter referred to as the informationalist-functionalist approach, I felt that a physically embodied (i.e., not computationally emulated or simulated) prosthetic approach, hereafter referred to as the physicalist-functionalist approach, would be a better approach to take. This was because even if the brain were completely reducible to computation, a prosthetic approach would necessarily facilitate the computation underlying the functioning of the neuron (as the physical operations of biological neurons do presently), and if the brain proved to be computationally irreducible, then the prosthetic approach would in such a case presumably preserve whatever salient physical processes were necessary. So the prosthetic approach didn’t necessitate the computational-reducibility premise – but neither did it preclude such a view, thereby allowing me to hedge my bets and increase the cumulative likelihood of maintaining subjective-continuity of consciousness through substrate-replacement in general.

This marks a telling proclivity recurrent throughout my project: the development of mutually exclusive and methodologically and/or technologically alternate systems for a given objective, each based upon alternate premises and contingencies – a sort of possibilizational web unfurling fore and outward. After all, if one approach failed, then we had alternate approaches to try. This seemed like the work-ethic and conceptualizational methodology that would best ensure the eventual success of the project.

I also had less assurance in the sufficiency of the informationalist-functionalist approach at the time, stemming mainly from a misconception about the premises of normative Whole-Brain Emulation (WBE). When I first discovered ANs, I was more dubious about the computational reducibility of the mind because I thought that WBE relied on the premise that neurons act in a computational fashion (i.e., like normative computational paradigms) to begin with—thus a conflation of classical computation with neural operation—rather than on the conclusion, drawn from the Church-Turing thesis, that mind is computable because the universe is. It is not that the brain is a computer to begin with, but that we can model any physical process via mathematical/computational emulation and simulation. The latter is the correct view, and I didn’t really realize that this was the case until after I had discovered the WBE roadmap in 2010. This fundamental misconception allowed me, however, to also independently arrive at the insight underlying the real premise of WBE: that combining premise A—that we had various mathematical computational models of neuron behavior—with premise B—that we can run mathematical models on computers—ultimately yields the conclusion C—that we can simply run the relevant mathematical models on computational substrate, thereby effectively instantiating the mind “embodied” in those neural operations while simultaneously eliminating many logistical and technological challenges of the prosthetic approach. This seemed likelier than the original assumption—conflating neuronal activity with normative computation, as though neurons were a special case unlike, say, muscle cells or skin cells, which isn’t the presumption WBE makes at all—because this approach only required the ability to mathematically model anything, rather than relying on a fundamental equivalence between two different types of physical system (neuron and classical computer). The fact that I mistakenly saw it as an approach to emulation that was categorically dissimilar to normative WBE also helped urge me on to continue conceptual development of the various sub-aims of the project after having found that the idea of brain emulation already existed, because I thought that my approach was sufficiently different to warrant my continued effort.

There are other reasons for suspecting that mind may not be computationally reducible using current computational paradigms—reasons that rely on neither vitalism (i.e., the claim that mind is at least partially immaterial and irreducible to physical processes) nor on the invalidity of the Church-Turing thesis. This line of reasoning has nothing to do with functionality and everything to do with possible physical bases for subjective-continuity, both (a) immediate subjective-continuity (i.e., how can we be a unified, continuous subjectivity if all our component parts are discrete and separate in space?), which can be considered the capacity to have subjective experience, also called sentience (as opposed to sapience, which designates the higher cognitive capacities, like abstract thinking), and (b) temporal subjective-continuity (i.e., how do we survive as continuous subjectivities through a process of gradual substrate replacement?). Thus this argument impacts the possibility of computationally reproducing mind only insofar as the definition of mind is not strictly functional but is made to include a subjective sense of self—or immediate subjective-continuity. Note that subjective-continuity through gradual replacement is not speculative (just the scale and rate required to sufficiently implement it are), but rather has proof of concept in the normal metabolic replacement of the neuron’s constituent molecules. Each of us is materially a different person than we were 7 years ago, and we still claim to retain subjective-continuity. Thus, gradual replacement works; it is just the scale and rate required that are in question.

This is another way in which my approach and project differs from WBE. WBE equates functional equivalence (i.e., the same output via different processes) with subjective equivalence, whereas my approach involved developing variant approaches to neuron-replication-unit design that were each based on a different hypothetical basis for instantive subjective continuity.

Are Current Computational Paradigms Sufficient?

Biological neurons are both analog and binary. It is useful to consider a 1st tier of analog processes, manifest in the action potentials occurring all over the neuronal soma and terminals, with a 2nd tier of binary processing, in that either the APs’ sum crosses the threshold value needed for the neuron to fire, or it falls short of it and the neuron fails to fire. Thus the analog processes form the basis of the digital ones. Moreover, the neuron is in an analog state even in the absence of membrane depolarization, through the generation of the resting-membrane potential (maintained via active ion-transport proteins), which is analog rather than binary because it always undergoes minor fluctuations, being instantiated by an active process (ion-pumps). Thus the neuron at any given time is always in the process of a state-transition (including minor state-transitions still within the variation-range allowed by a given higher-level static state; e.g., resting membrane potential is a single state, yet still undergoes minor fluctuations because the ions and components manifesting it still undergo state-transitions without the resting-membrane potential itself undergoing a state-transition), and thus is never definitively on or off. This brings us to the first potential physical basis for both immediate and temporal subjective-continuity. Analog states are continuous, and the fact that there is never a definitive break in the processes occurring at the lower levels of the neuron represents a potential basis for our subjective sense of immediate and temporal continuity.
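A toy illustration (not a biophysical model) of the two tiers: a continuously fluctuating analog potential that never sits still, and a binary fire/no-fire decision layered on top of it. All constants below are arbitrary:

```python
import random

RESTING_MV = -70.0    # baseline resting potential
THRESHOLD_MV = -55.0  # firing threshold

def step(potential_mv, synaptic_input_mv):
    """Advance one time step; even the 'resting' state keeps fluctuating."""
    pump_noise = random.uniform(-0.5, 0.5)             # active ion-pumps: never static
    potential_mv += synaptic_input_mv + pump_noise
    potential_mv += 0.1 * (RESTING_MV - potential_mv)  # drift back toward rest
    fired = potential_mv >= THRESHOLD_MV               # binary tier: fire or not
    if fired:
        potential_mv = RESTING_MV                      # reset after firing
    return potential_mv, fired
```

Even with zero synaptic input, repeated calls to step never return the same potential twice: the analog tier is always in transition, while the binary tier reads it as a simple fire/no-fire.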

Paradigms of digital computation, on the other hand, are at the lowest scale either definitively on or definitively off. While any voltage within a certain range will cause the generation of an output, it is still at base binary because in the absence of input the logic elements are not producing any sort of fluctuating voltage—they are definitively off. In binary computation, the substrates undergo a break (i.e., region of discontinuity) in their processing in the absence of inputs, and are in this way fundamentally dissimilar to the low-level operational modality of biological neurons by virtue of being procedurally discrete rather than procedurally continuous.

If the premise holds true that the analog and procedurally continuous nature of neuron-functioning (including action potentials, resting-membrane potential, and metabolic processes) forms a physical basis for immediate and temporal subjective-continuity, then current digital paradigms of computation may prove insufficient at maintaining subjective-continuity if used as the substrate in a gradual-replacement procedure, while still being sufficient to functionally replicate the mind in all empirically verifiable metrics and measures. This is due to both the operational modality of binary processing (i.e., lack of analog output) and the procedural modality of binary processing (the lack of temporal continuity, or the lack of minor fluctuations in reference to a baseline state, when in a resting or inoperative state). A logic element could have a fluctuating resting voltage rather than the absence of any voltage, and could thus be procedurally continuous while still being operationally discrete by producing solely binary outputs.
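Such a logic element might look like the following sketch: procedurally continuous, because its resting output keeps fluctuating around a baseline rather than dropping to a dead zero, yet operationally discrete, because any voltage still reads as exactly one binary value. The values are illustrative, not an electronics design:

```python
import random

BASELINE_V = 0.2           # fluctuating "idle" voltage: never definitively off
HIGH_V = 5.0               # logical-high output voltage
LOGIC_THRESHOLD_V = 2.5    # reading threshold between 0 and 1

def output_voltage(logic_state):
    """Analog line voltage: fluctuates at rest rather than going silent."""
    base = HIGH_V if logic_state else BASELINE_V
    return base + random.uniform(-0.05, 0.05)

def logical_reading(voltage):
    """Operationally discrete: any voltage maps to exactly one binary value."""
    return voltage >= LOGIC_THRESHOLD_V
```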

So there are two possibilities here. One is that any physical substrate used to replicate a neuron (whether via 1st-order embodiment, a.k.a. prosthesis/physical systems, or via 2nd-order embodiment, a.k.a. computational emulation or simulation) must not undergo a break in its operation in the absence of input—because biological neurons do not, and this may be a potential basis for instantive subjective-continuity—but rather must produce a continuous or uninterrupted signal when in a “steady state” (i.e., in the absence of inputs). The second possibility includes all the premises of the first, but adds that such an inoperative-state signal (or “no-inputs”-state signal) must undergo minor fluctuations, because only then is a steady stream of causal interaction occurring; producing a perfectly steady signal could be as discontinuous as no signal at all, like “being on pause.”

Thus one reason for developing the physicalist-functionalist (i.e., physically embodied prosthetic) approach to NRU design was a hedging of bets, in the case that (a) current computational substrates fail to replicate a personally continuous mind for the reasons described above, or (b) we fail to discover the principles underlying a given physical process—thus being unable to predictively model it—but still succeed in integrating it with the artificial systems comprising the prosthetic approach until such a time as we are able to discover its underlying principles, or (c) we find some other, heretofore unanticipated conceptual obstacle to the computational reducibility of mind.

Bibliography

Copeland, B. J. (2008). The Church-Turing Thesis. In The Stanford Encyclopedia of Philosophy (Fall 2008 Edition). Retrieved February 28, 2013, from http://plato.stanford.edu/archives/fall2008/entries/church-turing

Crick, F. (1984). Memory and molecular turnover. Nature, 312(5990), 101. PMID: 6504122

Criterion of Falsifiability. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/201091/criterion-of-falsifiability

Drexler, K. E. (1986). Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Books.

Grabianowski, E. (2007). How Brain-computer Interfaces Work. Retrieved February 28, 2013, from http://computer.howstuffworks.com/brain-computer-interface.htm

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013, from http://www.minduploading.org/

Martins, N. R., Erlhagen, W. & Freitas Jr., R. A. (2012). Non-destructive whole-brain monitoring using nanorobots: Neural electrical data rate requirements. International Journal of Machine Consciousness. Retrieved February 28, 2013, from http://www.nanomedicine.com/Papers/NanoroboticBrainMonitoring2012.pdf

Narayan, A. (2004). Computational Methods for NEMS. Retrieved February 28, 2013, from http://nanohub.org/resources/407

Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap (Technical Report #2008-3). Future of Humanity Institute, Oxford University.

Star, E. N., Kwiatkowski, D. J. & Murthy, V. N. (2002). Rapid turnover of actin in dendritic spines and its regulation by activity. Nature Neuroscience, 5, 239-246.

Tsien, J. Z., Rampon, C., Tang, Y. P. & Shimizu, E. (2000). NMDA receptor dependent synaptic reinforcement as a crucial process for memory consolidation. Science, 290, 1170-1174.

Zwass, V. (2013). Neural Network. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/410549/neural-network

Wolf, W. (March 2009). Cyber-physical Systems. In Embedded Computing. Retrieved February 28, 2013, from http://www.jiafuwan.net/download/cyber_physical_systems.pdf