
Immortality: Bio or Techno? – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 5, 2013
******************************
This essay is the eleventh and final chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first ten chapters were previously published on The Rational Argumentator under the following titles:
***

I Was a Techno-Immortalist Before I Came of Age

From the preceding chapters in this series, one can see that I recapitulated many notions and conclusions found in normative Whole-Brain Emulation. I realized that measuring the functional divergence between a candidate functional-equivalent and its original, through the process of virtual or artificial replication of environmental stimuli so as to coordinate their inputs, provides an experimental methodology for empirically validating the sufficiency and efficacy of different approaches. (Note, however, that such tests could not be performed to determine which NRU-designs or replication-approaches would preserve subjective-continuity, if the premises entertained during later periods of my project—that subjective-continuity may require a sufficient degree of operational “sameness”, and not just a sufficient degree of functional “sameness”—are correct.) I realized that we would only need to replicate in intensive detail and rigor those parts of our brain manifesting our personalities and higher cognitive faculties (i.e., the neocortex), and could get away with replicating at lower functional resolution the parts of the nervous system dealing with perception, actuation, and feedback between perception and actuation.

I read Eric Drexler’s Engines of Creation and imported the use of nanotechnology to facilitate both functional-replication (i.e., the technologies and techniques needed to replicate the functional and/or operational modalities of existing biological neurons) and the intensive, precise, and accurate scanning necessitated thereby. This was essentially Ray Kurzweil’s and Robert Freitas’s approach to the technological infrastructure needed for mind-uploading, as I discovered in 2010 via The Singularity is Near.

My project also bears striking similarities to Dmitry Itskov’s Project Avatar. My work on conceptual requirements for transplanting the biological brain into a fully cybernetic body — taking advantage of the technological and methodological infrastructures already in development for use in the separate disciplines of robotics, prosthetics, Brain-Computer Interfaces, and sensory-substitution to facilitate the operations of the body — is a prefigurement of his Phase 1. My later work on approaches to functional replication of neurons for the purpose of gradual substrate replacement/transfer and integration also parallels his later phases, in which the brain is gradually replaced with an equivalent computational emulation.

The main difference between my project and the extant Techno-Immortalist approaches, however, lies in my later inquiries into neglected potential bases for (a) our sense of experiential subjectivity (the feeling of being, what I’ve called immediate subjective-continuity)—and thus the entailed requirements for mental substrates aiming to maintain or attain such immediate subjectivity—and (b) our sense of temporal subjective-continuity (the feeling of being the same person through a process of gradual substrate-replacement—which, I take pains to remind the reader, already exists in the biological brain via the natural biological process of molecular turnover, which I called metabolic replacement throughout the course of the project), and, likewise, the requirements for mental substrates aiming to maintain temporal subjective-continuity through a gradual substrate-replacement/transfer procedure.

In this final chapter, I summarize the main approaches to subjective-continuity thus far considered, including possible physical bases for its current existence and the entailed requirements for NRU designs (that is, for Techno-Immortalist approaches to indefinite-longevity) that maintain such physical bases of subjective-continuity. I will then explore why “Substrate-Independent Minds” is a useful and important term, and try to dispel one particularly common and easy-to-make misconception resulting from it.

Why Should We Worry about Subjective-Continuity?

This concern marks perhaps the most telling difference between my project and normative Whole-Brain Emulation. Instead of stopping at the presumption that functional equivalence correlates with immediate subjective-continuity and temporal subjective-continuity, I explored several features of neural operation that looked like candidates for providing a basis of both types of subjective-continuity, by looking for those systemic properties and aspects that the biological brain possesses and other physical systems don’t. The physical system underlying the human mind (i.e., the brain) possesses experiential subjectivity; my premise was that we should look for properties not shared by other physical systems to find a possible basis for the property of immediate subjective-continuity. I’m not claiming that any of the aspects and properties considered definitely constitute such a basis; they were merely the avenues I explored throughout my 4-year quest to conquer involuntary death. I do claim, however, that we are forced to conclude that some aspect shared by the individual components (e.g., neurons) of the brain and not shared by other types of physical systems forms such a basis (which doesn’t preclude the possibility of immediate subjective-continuity being a spectrum or gradient rather than a definitive “thing” or process with non-variable parameters), or else that immediate subjective continuity is a normal property of all physical systems, from atoms to rocks.

A phenomenological proof of the non-equivalence of function and subjectivity or subjective-experientiality is the physical irreducibility of qualia – that we could understand in intricate detail the underlying physics of the brain and sense-organs, and nowhere derive or infer the nature of the qualia such underlying physics embodies. To experimentally verify which approaches to replication preserve both functionality and subjectivity would necessitate a science of qualia. This could be conceivably attempted through making measured changes to the operation or inter-component relations of a subject’s mind (or sense organs)—or by integrating new sense organs or neural networks—and recording the resultant changes to his experientiality—that is, to what exactly he feels. Though such recordings would be limited to his descriptive ability, we might be able to make some progress—e.g., he could detect the generation of a new color, and communicate that it is indeed a color that doesn’t match the ones normally available to him, while still failing to communicate to others what the color is like experientially or phenomenologically (i.e., what it is like in terms of qualia). This gets cruder the deeper we delve, however. 
While we have unchanging names for some “quales” (e.g., green, sweetness, hot, and cold), when it comes to the qualia corresponding with our perception of our own “thoughts” (a term which will here designate all non-normatively-perceptual experiential modalities available to the mind—thus including wordless “daydreaming” and excluding autonomic functions like digestion or respiration), we have both far less precision (i.e., fewer words to describe them) and less accuracy (i.e., too many words for one thing, which the subject may confuse; the lack of a quantitative definition for words relating to emotions and mental modalities/faculties seems to ensure that errors may be carried forward and increase with each iteration, making precise correlation of operational/structural changes with changes to qualia or experientiality increasingly harder and more unlikely).

Thus whereas the normative movements of Whole-Brain Emulation and Substrate-Independent Minds stopped at functional replication, I explored approaches to functional replication that preserved experientiality (i.e., a subjective sense of anything) and that maintained subjective-continuity (the experiential correlate of feeling like being yourself) through the process of gradual substrate-transfer.

I do not mean to undermine in any way Whole-Brain Emulation and the movement towards Substrate-Independent Minds promoted by such people as Randal Koene via, formerly, his minduploading.org website and, more recently, his Carbon Copies project, Anders Sandberg and Nick Bostrom through their WBE Roadmap, and various other projects on connectomes. These projects are untellably important, but conceptions of subjective-continuity (not pertaining to its relation to functional equivalence) are beyond their scope.

Whether or not subjective-continuity is possible through a gradual-substrate-replacement/transfer procedure is not under question. That we achieve and maintain subjective-continuity despite our constituent molecules being replaced within a period of 7 years, through what I’ve called “metabolic replacement” but what would more normatively be called “molecular-turnover” in molecular biology, is not under question either. What is under question is (a) what properties biological nervous systems possess that could both provide a potential physical basis for subjective-continuity and that other physical systems do not possess, and (b) what the design requirements are for approaches to gradual substrate replacement/transfer that preserve such postulated sources of subjective-continuity.

Graduality

This was the first postulated basis for preserving temporal subjective-continuity. Our bodily systems’ constituent molecules are all replaced within a span of 7 years, which provides empirical verification for the existence of temporal subjective-continuity through gradual substrate replacement. Unlike the later avenues of inquiry, however, this is not an actual physical basis for immediate subjective-continuity; it is rather a way of avoiding externally induced subjective-discontinuity, as opposed to a way of maintaining the existing biological bases for subjective-continuity. We are most likely to avoid negating subjective-continuity through a substrate-replacement procedure if we try to maintain the existing degree of graduality (the molecular-turnover or “metabolic-replacement” rate) that exists in biological neurons.

The reasoning behind concerns of graduality also serves to illustrate a common misconception created by the term “Substrate-Independent Minds”. This term should denote the premise that mind can be instantiated on different types of substrate, in the way that a given computer program can run on different types of computational hardware. It stems from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that mind is an emergent process not reducible to its isolated material constituents, while still being instantiated thereby. This first (legitimate) interpretation is a refutation of all claims of metaphysical vitalism or substance dualism. The term should not denote the claim that since mind is software, we can thus send our minds (say, encoded in a wireless signal) from one substrate to another without subjective-discontinuity. This second meaning would incur the emergent effect of a non-gradual substrate-replacement procedure (that is, the wholesale reconstruction of a duplicate mind without any gradual integration procedure). In such a case one stops all causal interaction between components of the brain—in effect putting it on pause. The brain is now static. This differs even from being in an inoperative state, where at least the components (i.e., neurons) still undergo minor operational fluctuations and are still “on” in an important sense (see “Immediate Subjective-Continuity” below), which is not the case here. Beaming between substrates necessitates that all causal interaction—and thus procedural continuity—between software-components is halted during the interval of time in which the information is encoded, sent wirelessly, and subsequently decoded. The mind would be reinstantiated upon arrival in the new substrate, yes, but not without being put on pause in the interim.
The phrase “Substrate-Independent Minds” is an important and valuable one, and should indeed be championed with righteous vehemence—but only in regard to its first meaning (that mind can be instantiated on various different substrates) and not its second, illegitimate meaning (that we ourselves can switch between mental substrates, without any sort of gradual-integration procedure, and still retain subjective-continuity).

Later lines of thought in this regard consisted of positing several sources of subjective-continuity and then conceptualizing various different approaches or varieties of NRU-design that would maintain these aspects through the gradual-replacement procedure.

Immediate Subjective-Continuity

This line of thought explored whether certain physical properties of biological neurons provide the basis for subjective-continuity, and whether current computational paradigms would need to possess such properties in order to serve as a viable substrate-for-mind—that is, one that maintains subjective-continuity. The biological brain has massive parallelism—that is, separate components are instantiated concurrently in time and space. They actually exist and operate at the same time. By contrast, current paradigms of computation, with a few exceptions, are predominantly serial. They instantiate a given component or process one at a time and jump between components or processes so as to integrate these separate instances and create the illusion of continuity. If such computational paradigms were used to emulate the mind, then only one component (e.g., neuron or ion-channel, depending on the chosen model-scale) would be instantiated at a given time. This line of thought postulates that computers emulating the mind may need to be massively parallel in the same way that the biological brain is in order to preserve immediate subjective-continuity.
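This serial-versus-parallel contrast can be sketched in a toy model (the code and all names in it are illustrative assumptions of mine, not part of the original project): a serial substrate visits its emulated components one at a time, so at any given instant only one component is actually being instantiated.

```python
# Toy contrast between serial and (conceptually) parallel instantiation.
# In a serial substrate, only the component currently being visited is
# "live"; the others exist only as stored state awaiting their turn.

def step_serial(states, inputs):
    """Advance every emulated neuron by one time-step, one at a time."""
    new_states = {}
    for neuron_id, state in states.items():   # components visited sequentially
        new_states[neuron_id] = state + inputs.get(neuron_id, 0.0)
    return new_states

states = {"n1": 0.0, "n2": 0.0, "n3": 0.0}
inputs = {"n1": 1.0, "n2": 0.5}
print(step_serial(states, inputs))  # {'n1': 1.0, 'n2': 0.5, 'n3': 0.0}
```

A massively parallel substrate would instead update all components simultaneously in physical time, as the brain’s neurons do; the serial loop merely produces the same end-state, which is precisely the gap between functional equivalence and the concern raised here.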

Procedural Continuity

Much like the preceding line of thought, this postulates that a possible basis for temporal subjective-continuity is the resting membrane potential of neurons. While in an inoperative state—i.e., not being impinged by incoming action-potentials, or not being stimulated—a neuron (a) isn’t definitively off, but rather produces a baseline voltage that assures that there is no break (or region of discontinuity) in its operation, and (b) still undergoes minor fluctuations from the baseline value within a small deviation-range, thus showing that causal interaction amongst the components emergently instantiating that resting membrane potential (namely ion-pumps) never halts. Logic gates, on the other hand, do not produce a continuous voltage when in an inoperative state. This line of thought claims that computational elements used to emulate the mind should exhibit the generation of such a continuous inoperative-state signal (e.g., voltage) in order to maintain subjective-continuity. The claim’s stronger version holds that the continuous inoperative-state signal produced by such computational elements must also undergo minor fluctuations (i.e., state-transitions) within the deviation-range allowed by the larger inoperative-state signal, which maintains causal interaction among lower-level components and thus exhibits the postulated basis for subjective-continuity—namely, procedural continuity.
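A minimal numerical sketch of this distinction follows (the −70 mV baseline is a typical textbook resting potential; the deviation-range and the idle-gate model are my own illustrative assumptions):

```python
import random

REST_POTENTIAL = -70.0  # mV; typical neuronal resting membrane potential
DEVIATION_RANGE = 0.5   # mV; assumed small fluctuation band around baseline

def inoperative_neuron_voltage(rng):
    """A quiescent neuron still emits a continuous baseline voltage with
    minor fluctuations -- it is never definitively 'off'."""
    return REST_POTENTIAL + rng.uniform(-DEVIATION_RANGE, DEVIATION_RANGE)

def inoperative_logic_gate():
    """An idle logic gate in this toy model holds no comparable
    continuously fluctuating signal."""
    return 0.0

rng = random.Random(42)
samples = [inoperative_neuron_voltage(rng) for _ in range(5)]
# The "signal" never breaks: every sample stays within the deviation-range.
assert all(abs(v - REST_POTENTIAL) <= DEVIATION_RANGE for v in samples)
```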

Operational Isomorphism

This line of thought claims that a possible source of subjective-continuity is the operational similarity among the baseline components comprising the emergent system instantiating mind. In physicality this isn’t a problem, because the higher-scale components (e.g., single neurons, sub-neuron components like ion-channels and ion-pumps, and individual protein complexes forming the sub-components of an ion-channel or pump) are instantiated by the lower-level components. Those lower-level components are more similar to one another in terms of the rules determining their behavior and state-changes. At the molecular scale, the features determining state-changes (intra-molecular forces, atomic valences, etc.) are the same. This changes as we go up the scale—most notably at the scale of high-level neural regions/systems. In a software model, however, we have a choice as to what scale we use as our model-scale. This postulated source of subjective-continuity would entail that we choose as our model-scale one at which the components have a high degree of this property (operational isomorphism, or similarity) and that we not choose a scale at which the components have a lesser degree of this property.

Operational Continuity

This line of thought explored the possibility that we might introduce operational discontinuity by modeling (i.e., computationally instantiating) not the software instantiated by the physical components of the neuron, but instead those physical components themselves—which for illustrative purposes can be considered as the difference between instantiating the software directly and instantiating the physics of the logic gates giving rise to it. Though the software would still be instantiated, vicariously, as a result of computationally instantiating its biophysical foundation, we may be introducing additional operational steps and thus adding an unnecessary dimension of discontinuity that needlessly jeopardizes the likelihood of subjective-continuity.

These concerns are wholly divorced from functionalist concerns. If we disregarded these potential sources of subjective-continuity, we could still functionally-replicate a mind in all empirically-verifiable measures yet nonetheless fail to create minds possessing experiential subjectivity. Moreover, the verification experiments discussed in Part 2 do provide a falsifiable methodology for determining which approaches best satisfy the requirements of functional equivalence. They do not, however, provide a method of determining which postulated sources of subjective-continuity are true—simply because we have no falsifiable measures to determine either immediate or temporal subjective-discontinuity, other than functionality. If functional equivalence failed, it would tell us that subjective-continuity failed to be maintained. If functional-equivalence was achieved, however, it doesn’t necessitate that subjective-continuity was maintained.

Bio or Cyber? Does It Matter?

Biological approaches to indefinite-longevity, such as Aubrey de Grey’s SENS and Michael Rose’s Evolutionary Selection for Longevity, among others, have both comparative advantages and drawbacks. Their chances of introducing subjective-discontinuity are virtually nonexistent compared to non-biological (which I will refer to as Techno-Immortalist) approaches, which makes them at once more appealing. However, it remains to be seen whether the advantages of Techno-Immortalist approaches outweigh their comparative dangers in regard to their potential to introduce subjective-discontinuity. If such dangers can be obviated, the Techno-Immortalist approach has certain potentials which Bio-Immortalist projects lack—or which are at least comparatively harder to facilitate using biological approaches.

Perhaps foremost among these potentials is the ability to actively modulate and modify the operations of individual neurons, which, if integrated across scales (that is, the concerted modulation/modification of whole emergent neural networks and regions via operational control over their constituent individual neurons), would allow us to take control over our own experiential and functional modalities (i.e., our mental modes of experience and general abilities/skills), thus increasing our degree of self-determination and the control we exert over the circumstances and determining conditions of our own being. Self-determination is the sole central and incessant essence of man; it is his means of self-overcoming—of self-dissent in a striving towards self-realization—and the ability to increase the extent of such self-control, self-mastery, and self-actualization is indeed a comparative advantage of techno-immortalist approaches.

To modulate and modify biological neurons, on the other hand, necessitates either high-precision genetic engineering, or likely the use of nanotech (i.e., NEMS), because whereas the proposed NRUs already have the ability to controllably vary their operations, biological neurons necessitate an external technological infrastructure for facilitating such active modulation and modification.

Biological approaches to increased longevity also appear to necessitate less technological infrastructure in terms of basic functionality. Techno-Immortalist approaches require precise scanning technologies and techniques that neither damage nor distort (i.e., affect to the point of operational and/or functional divergence from their normal in situ state of affairs) the features and properties they are measuring. However, there is a useful distinction to be made between biological approaches to increased longevity and biological approaches to indefinite longevity. Aubrey de Grey’s notion of Longevity Escape Velocity (LEV) serves to illustrate this distinction. With SENS and most biological approaches, he points out that although remediating certain biological causes of aging will extend our lives, by that time other causes of aging that had been superseded (i.e., prevented from making a significant impact on aging) by the higher-impact causes may begin to make a non-negligible impact. Aubrey’s proposed solution is LEV: if we can develop remedies for these newly significant causes within the amount of time gained by the remediation of the first set of causes, then we can stay on the leading edge and continue to prolong our lives. This is in contrast to other biological approaches, like Eric Drexler’s conception of nanotechnological cell-maintenance and cell-repair systems, which, by virtue of being able to fix any source of molecular damage or disarray vicariously—not by eliminating the source, but by iteratively repairing and/or replacing its effects or “symptoms”—will continue to work on any new molecular causes of damage without any new upgrades or innovations to their underlying technological and methodological infrastructures.

Such systems would more appropriately be deemed indefinite-biological-longevity technologies, in contrast to mere biological-longevity technologies. Techno-Immortalist approaches are by and large exclusively of the indefinite-longevity-extension variety, and so have an advantage over certain biological approaches to increased longevity; but such advantages do not apply to biological approaches to indefinite longevity.
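de Grey’s LEV condition can be caricatured numerically. In this sketch (all numbers are invented for illustration, not drawn from the SENS literature), we stay ahead of aging only if each round of therapies buys more time than the next round takes to develop:

```python
def stays_ahead(years_gained, years_to_develop_next):
    """Toy Longevity-Escape-Velocity check: the cumulative runway of
    extra years must never go negative between therapy rounds."""
    runway = 0.0
    for gained, dev_time in zip(years_gained, years_to_develop_next):
        runway += gained - dev_time
        if runway < 0:
            return False   # aging catches up before the next remedy arrives
    return True

assert stays_ahead([10, 8, 8], [6, 7, 7])      # gains outpace development
assert not stays_ahead([5, 4, 3], [6, 7, 8])   # falls behind immediately
```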

A final advantage of Techno-Immortalist approaches is the independence from external environments they provide us. They also make death by accident far less likely, both by enabling us to have more durable bodies and by providing independence from external environments, meaning that certain extremes of temperature, pressure, impact-velocity, atmosphere, etc., will not immediately entail our death.

I do not want to discredit any approaches to immortality discussed in this essay, nor any I haven’t mentioned. Every striving and attempt at immortality is virtuous and righteous, and this sentiment will only become more and more apparent, culminating on the day when humanity looks back and wonders how we could have spent so very much money and effort on the Space Race to the Moon with no perceivable scientific, resource, or monetary gain (though there were some nationalistic and militaristic considerations in terms of America not being superseded on either account by Russia), yet took so long to make a concerted global effort to first demand and then implement well-funded attempts to finally defeat death—that inchoate progenitor of 100,000 unprecedented cataclysms a day. It’s true—the world ends 100,000 times a day, to be lighted upon not once more for all of eternity. Every day. What have you done to stop it?

So What?

Indeed, so what? What does this all mean? After all, I never actually built any systems, or did any physical experimentation. I did, however, do a significant amount of conceptual development and thinking on both the practical consequences (i.e., required technologies and techniques, different implementations contingent upon different premises and possibilities, etc.) and the larger social and philosophical repercussions of immortality prior to finding out about other approaches. And I planned on doing physical experimentation and building physical systems; but I thought that working on it in my youth, until such a time as to be in the position to test and implement these ideas more formally via academia or private industry, would be better for the long-term success of the endeavor.

As noted in Chapter 1, this reifies the naturality and intuitive simplicity of indefinite longevity’s ardent desirability and fervent feasibility, along a large variety of approaches ranging from biotechnology to nanotechnology to computational emulation. It also reifies the naturality and desirability of Transhumanism. I saw one of the virtues of this vision as its potential to make us freer, to increase our degree of self-determination, as giving us the ability to look and feel however we want, and the ability to be—and more importantly to become—anything we so desire. Man is marked most starkly by his urge and effort to make his own self—to formulate the best version of himself he can, and then to actualize it. We are always reaching toward our better selves—striving forward in a fit of unbound becoming toward our newest and thus truest selves; we always have been, and with any courage we always will.

Transhumanism is but the modern embodiment of our ancient striving towards increased self-determination and self-realization—of all we’ve ever been and done. It is the current best contemporary exemplification of what has always been the very best in us—the improvement of self and world. Indeed, the ‘trans’ and the ‘human’ in Transhumanism can only signify each other, for to be human is to strive to become more than human—or to become more so human, depending on which perspective you take.

So come along and long for more with me; the best is e’er yet to be!

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Koene, R. (2011). What is carboncopies.org? Retrieved February 28, 2013 from http://www.carboncopies.org/

Rose, M. (October 28, 2004). Biological Immortality. In B. Klein (Ed.), The Scientific Conquest of Death (pp. 17-28). Immortality Institute.

Sandberg, A., & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap, Technical Report #2008-3. Retrieved February 28, 2013 from http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

Sandberg, A., Bostrom, N., & Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013 from http://www.minduploading.org/

de Grey, A. D. N. J. (2004). Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now. PLoS Biology, 2(6), e187. doi:10.1371/journal.pbio.0020187

Choosing the Right Scale for Brain Emulation – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 2, 2013
******************************
This essay is the ninth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first eight chapters were previously published on The Rational Argumentator under the following titles:
***

The two approaches falling within this class considered thus far are (a) computational models that model the biophysical (e.g., electromagnetic, chemical, and kinetic) operation of the neurons—i.e., the physical processes instantiating their emergent functionality, whether at the scale of tissues, molecules, atoms, or anything in between—and (b) abstracted models, a term which designates anything that computationally models the neuron using the (sub-neuron but super-protein-complex) components themselves as the chosen model-scale. (The former approach, by contrast, uses for its chosen model-scale the scale at which the physical processes emergently instantiating those higher-level neuronal components exist, such as the membrane and the individual proteins forming the transmembrane protein-complexes.) A model counts as abstracted regardless of whether each component is abstracted as a normative-electrical-component analogue (i.e., using circuit diagrams in place of biological schematics, such as equating the lipid bilayer membrane with a capacitor connected to a variable battery) or as a mathematical model in which a relevant component or aspect of the neuron becomes a term (e.g., a variable or constant) in an equation.
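The capacitor-plus-variable-battery analogy mentioned above can be made concrete with a minimal leaky-integrator sketch (the parameter values are arbitrary placeholders of mine, not taken from any particular published model):

```python
# Circuit-analogue abstraction of the membrane: a capacitor (C) leaking
# through a resistance (R) toward a battery voltage (E_REST), integrated
# with the forward-Euler method.

C = 1.0         # membrane capacitance (arbitrary units)
R = 10.0        # membrane resistance (arbitrary units)
E_REST = -70.0  # battery (resting) potential, mV
DT = 0.1        # integration time-step

def euler_step(v, i_input):
    """One step of dV/dt = (E_REST - V)/(R*C) + I/C."""
    return v + ((E_REST - v) / (R * C) + i_input / C) * DT

v = 0.0
for _ in range(1000):
    v = euler_step(v, 0.0)
print(round(v, 2))  # -70.0 -- the voltage has decayed to the resting value
```

Note how the biology has disappeared entirely: the lipid bilayer and its ion-pumps survive only as the constants C, R, and E_REST in an equation, which is exactly what makes this an abstracted rather than a biophysical model.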

It was during the process of trying to formulate different ways of mathematically (and otherwise computationally) modeling neurons or sub-neuron regions that I laid the conceptual embryo of the first new possible basis for subjective-continuity: the notion of operational isomorphism.

A New Approach to Subjective-Continuity Through Substrate Replacement

There are two other approaches to increasing the likelihood of subjective-continuity, each based on the presumption of one of two possible physical bases for discontinuity, that I explored during this period. Note that these approaches are unrelated to graduality, which has been the main determining factor impacting the likelihood of subjective-continuity considered thus far. The new approaches consist of designing the NRUs so as to retain the respective postulated physical bases for subjective-continuity that exist in the biological brain. Thus they are unrelated to increasing the efficacy of the gradual-replacement procedure itself, relating instead to the design requirements of the functional-equivalents used to gradually replace the neurons while maintaining immediate subjective-continuity.

Operational Isomorphism

Whereas functionality deals only with the emergent effects or end-product of a given entity or process, operationality deals with the procedural operations performed so as to give rise to those emergent effects. A mathematical model of a neuron might be highly functionally equivalent while failing to be operationally equivalent in most respects. Isomorphism can be considered a measure of “sameness”, but technically means a 1-to-1 correspondence between the elements of two sets (which would correspond with operational isomorphism) or between the sums or products of the elements of two sets (which would correspond with functional isomorphism, using the definition of functionality employed above). Thus, operational isomorphism is the degree to which the sub-components (be they material, as in entities, or procedural, as in processes) of two larger-scale components, or the operational modalities possessed by each respective collection of sub-components, are equivalent.
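The distinction can be illustrated with a deliberately trivial example: two routines that are functionally isomorphic over their shared domain (identical input-output mapping) while sharing almost no operational steps.

```python
# Functionally equivalent, operationally dissimilar: both compute the sum
# 1 + 2 + ... + n, but one iterates while the other uses a closed form.

def sum_iterative(n):
    total = 0
    for k in range(1, n + 1):   # step-by-step accumulation
        total += k
    return total

def sum_formula(n):
    return n * (n + 1) // 2     # Gauss's closed form; no iteration at all

# Same emergent "function", radically different internal operations:
assert sum_iterative(100) == sum_formula(100) == 5050
```

A verification procedure that only checks outputs would judge these routines identical, just as a purely functionalist test would judge two mental substrates identical regardless of their operational isomorphism.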

To what extent does the brain possess operational isomorphism? It seems to depend on the scale being considered. At the highest scale, different areas of the nervous system are classed as systems (as in functional taxonomies) or regions (as in anatomical taxonomies). At this level the separate regions (i.e., components of a shared scale) differ widely from one another in terms of operational modality; they process information very differently from the way other components on the same scale process information. If this scale were chosen as the model-scale of our replication-approach, and the preceding premise (that the physical basis for subjective-continuity is the degree of operational isomorphism between components at a given scale) is accepted, then we would have a high probability of replicating functionality, but a low probability of retaining subjective-continuity through gradual replacement. This would be true even if we used the degree of operational isomorphism between separate components as the only determining factor for subjective-continuity, and ignored concerns of graduality (e.g., the scale or rate—or scale-to-rate ratio—at which gradual substrate replacement occurs).

Contrast this to the molecular scale, where the operational modality of each component (being a given molecule) and the procedural rules determining the state-changes of components at this scale are highly isomorphic. The state-changes of a given molecule are determined by molecular and atomic forces. Thus if we use an informational-functionalist approach, choose a molecular scale for our model, and accept the same premises as the first example, we would have a high probability of both replicating functionality and retaining subjective-continuity through gradual replacement because the components (molecules) have a high degree of operational isomorphism.

Note that this is only a requirement for the sub-components instantiating the high-level neural regions/systems that embody our personalities and higher cognitive faculties such as the neocortex — i.e., we wouldn’t have to choose a molecular scale as our model scale (if it proved necessary for the reasons described above) for the whole brain, which would be very computationally intensive.

So at the atomic and molecular scale the brain possesses a high degree of operational isomorphism. On the scale of individual protein complexes, which collectively form a given sub-neuronal component (e.g., an ion channel), components still appear to possess a high degree of operational isomorphism, because all state-changes are determined by the rules governing macroscale proteins and protein-complexes (i.e., biochemistry, particularly protein-protein interactions); by virtue of being made of the same general constituents (amino acids), the factors determining state-changes at this level are shared by all components at this scale. The scale of individual neuronal components, however, seems to possess a comparatively lesser degree of operational isomorphism. Some ion channels are ligand-gated while others are voltage-gated. Thus, different aspects of physicality (i.e., molecular shape and voltage, respectively) form the procedural rules determining state-changes at this scale. Since there are now two different determining factors at this scale, its degree of operational isomorphism is comparatively less than the protein and protein-complex scale and the molecular scale, both of which appear to have only one governing procedural-rule set. The scale of individual neurons, by contrast, appears to possess a greater degree of operational isomorphism: every neuron fires according to its threshold value, and sums analog action-potential values into a binary output (i.e., the neuron either fires or does not). All individual neurons operate in a highly isomorphic manner. Even though individual neurons of a given type are more operationally isomorphic in relation to each other than to a neuron of another type, all neurons regardless of type still act in a highly isomorphic manner.

However, the scale of neuron-clusters and neural-networks, which operate and communicate according to spatiotemporal sequences of firing patterns (action-potential patterns), appears to possess a lesser degree of operational isomorphism compared to individual neurons, because different sequences of firing patterns will mean different things to two respective neural clusters or networks. Also note that at this scale the degree of functional isomorphism between components appears to be less than their degree of operational isomorphism—that is, the way each cluster or network operates is more similar in relation to each other than is their actual function (i.e., what they effectively do). And lastly, at the scale of high-level neural regions/systems, components (i.e., neural regions) differ significantly in morphology, in operationality, and in functionality; thus they appear to constitute the scale that possesses the least operational isomorphism.
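
The summation-and-threshold behavior invoked above can be sketched as a toy integrate-and-compare unit. This is a simplification with illustrative millivolt values (the numbers and the function name are mine; real neurons integrate graded post-synaptic potentials over time and space), but it shows the analog-to-binary step that makes all neurons operationally isomorphic at this scale.

```python
def fires(potentials, threshold=-55.0, resting=-70.0):
    """Toy integrate-and-compare neuron (illustrative values, in mV).

    Analog first tier: incoming post-synaptic potentials are summed
    onto the resting membrane potential.
    Binary second tier: the neuron either fires or it does not.
    """
    membrane = resting + sum(potentials)
    return membrane >= threshold  # binary output from analog summation

# Two excitatory inputs of +8 mV each push -70 mV past the -55 mV threshold:
assert fires([8.0, 8.0]) is True
# One such input alone falls short, so the neuron stays silent:
assert fires([8.0]) is False
```

Every neuron, regardless of type, runs some variant of this same sum-then-compare procedure, which is the sense in which the neuronal scale is operationally isomorphic.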

I will now illustrate the concept of operational isomorphism using the physical-functionalist and the informational-functionalist NRU approaches, respectively, as examples. In terms of the physical-functionalist (i.e., prosthetic neuron) approach, both the passive (i.e., “direct”) and CPU-controlled sub-classes, respectively, are operationally isomorphic. An example of a physical-functionalist NRU that would not possess operational isomorphism is one that uses a passive-physicalist approach for one type of component (e.g., voltage-gated ion channels) and a CPU-controlled/cyber-physicalist approach [see Part 4 of this series] for another type of component (e.g., ligand-gated ion channels)—on that scale the components act according to different technological and methodological infrastructures, exhibit different operational modalities, and thus appear to possess a low degree of operational isomorphism. Note that the concern is not with the degree of operational isomorphism between the functional-replication units and their biological counterparts, but rather with the degree of operational isomorphism between the functional-replication units and other units on the same scale.

Another possibly relevant type of operational isomorphism is the degree of isomorphism between the individual sub-components or procedural operations (i.e., “steps”) composing a given component, designated here as intra-operational isomorphism. While very similar to the degree of isomorphism for the scale immediately below, this differs from (i.e., is not equivalent to) such a designation in that the sub-components of a given larger component could be functionally isomorphic in relation to each other without being operationally isomorphic in relation to all other components on that scale. The passive sub-approach of the physical-functionalist approach would possess a greater degree of intra-operational isomorphism than would the CPU-controlled/cyber-physicalist sub-approach, because presumably each component would interact with the others (via physically embodied feedback) according to the same technological and methodological infrastructure—be it mechanical, electrical, chemical, or otherwise. The CPU-controlled sub-approach, by contrast, would possess a lesser degree of intra-operational isomorphism, because the sensors, CPU, and the electric or electromechanical systems, respectively (the three main sub-components for each singular neuronal component—e.g., an artificial ion channel), operate according to different technological and methodological infrastructures and thus exhibit alternate operational modalities in relation to each other.

In regard to the informational-functionalist approach, an NRU model that would be operationally isomorphic is one wherein, regardless of the scale used, the type of approach used to model a given component on that scale is as isomorphic as possible with the ones used to model other components on the same scale. For example, if one uses a mathematical model to simulate spiking regions of the dendritic spine, then one shouldn’t use a non-mathematical (e.g., strict computational-logic) approach to model non-spiking regions of the dendritic spine. Since the number of variations to the informational-functionalist approach is greater than could exist for the physical-functionalist approach, there are more gradations to the degree of operational isomorphism. Using the exact same branches of mathematics to model the two respective components would incur a greater degree of operational isomorphism than if we used alternate mathematical techniques from different disciplines to model them. Likewise, if we used different computational approaches to model the respective components, then we would have a lesser degree of operational isomorphism. If we emulated some components while merely simulating others, we would have a lesser degree of operational isomorphism than if both were either strictly simulatory or strictly emulatory.

If this premise proves true, it suggests that when picking the scale of our replication-approach (be it physical-functionalist or informational-functionalist), we should choose a scale that exhibits operational isomorphism—for example, the molecular scale rather than the scale of high-level neural regions—and that we should not use widely dissimilar types of modeling techniques for one component (e.g., a molecular system) and for another component on the same scale.

Note that, unlike operational-continuity, the degree of operational isomorphism was not an explicit concept or potential physical basis for subjective-continuity at the time of my work on immortality (i.e., this concept wasn’t yet fully fleshed out in 2010); rather, it was formulated while going over my notes from that period so as to distill the broad developmental gestalt of the project. Though it appears somewhat inherent in those notes (i.e., it is hinted at), it wasn’t made explicit until relatively recently.

The next chapter describes the rest of my work on technological approaches to techno-immortality in 2010, focusing on a second new approach to subjective-continuity through a gradual-substrate-replacement procedure, and concluding with an overview of the ways my project differs from the other techno-immortalist projects.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Immortality: Material or Ethereal? Nanotech Does Both! – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 11, 2013
******************************

This essay is the second chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first chapter was previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”.

In August 2006 I conceived of the initial cybernetic brain-transplant procedure. It originated from a very simple, even intuitive sentiment: if there were heart and lung machines and prosthetic organs, then why couldn’t these be integrated in combination with modern (and future) robotics to keep the brain alive past the death of its biological body? I saw a possibility, felt its magnitude, and threw myself into realizing it. I couldn’t think of a nobler quest than the final eradication of involuntary death, and felt willing to spend the rest of my life trying to make it happen.

First I collected research on organic brain transplantation, on maintaining the brain’s homeostatic and regulatory mechanisms outside the body (or in this case without the body), on a host of prosthetic and robotic technologies (including sensory prosthesis and substitution), and on the work in Brain-Computer-Interface technologies that would eventually allow a given brain to control its new, non-biological body—essentially collecting the disparate mechanisms and technologies that would collectively converge to facilitate the creation of a fully cybernetic body to house the organic brain and keep it alive past the death of its homeostatic and regulatory organs.

I had by this point come across online literature on Artificial Neurons (ANs) and Artificial Neural Networks (ANNs), which are basically simplified mathematical models of neurons meant to process information in a way coarsely comparable to them. There was no mention in the literature of integrating them with existing neurons, or of replacing existing neurons toward the objective of immortality; their use was merely as an interesting approach to computation, particularly well-suited to certain situations. While artificial neurons can be run on general-purpose hardware (though massively parallel architectures are the most efficient for ANNs), I had something more akin to neuromorphic hardware in mind (though I wasn’t aware of that term just yet).

At its most fundamental level, Artificial Neurons need not even be physical at all. Their basic definition is a mathematical model roughly based on neuronal operation, and there is nothing precluding that model from existing solely on paper, with no actual computation going on. When I discovered them, I had thought that a given artificial neuron was a physically embodied entity rather than a software simulation—i.e., an electronic device that operates in a way comparable to biological neurons. Upon learning that they were mathematical models, however, and that each AN needn’t be a separate entity from the rest of the ANs in a given AN Network, I saw no problem in designing them so as to be separate physical entities (which they needed to be in order to fit the purposes I had for them—namely, the gradual replacement of biological neurons with prosthetic functional equivalents). Each AN would be a software entity run on a piece of computational substrate, enclosed in a protective casing allowing it to co-exist with the biological neurons already in place. The mathematical or informational outputs of the simulated neuron would be translated into biophysical, chemical, and electrical output by operatively connecting the simulation to an appropriate series of actuators (which could range from something as simple as producing electric fields or currents, to the release of chemical stores of neurotransmitters), along with a series of sensors to translate biophysical, chemical, and electrical properties into the mathematical or informational form they would need to be in to be accepted as input by the simulated AN.
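
The sensor/model/actuator arrangement described above might be sketched as follows. This is a minimal illustration of the control flow only; the class name, the weight/threshold parameters, and the actuator command string are hypothetical stand-ins of mine, not an actual NRU design.

```python
class PhysicallyInterfacedAN:
    """Sketch of a simulated artificial neuron run on local substrate,
    bridged to biology by a sensor stage (physical -> informational)
    and an actuator stage (informational -> physical)."""

    def __init__(self, weight, threshold):
        self.weight = weight          # illustrative synaptic weight
        self.threshold = threshold    # illustrative firing threshold

    def step(self, sensed_voltages):
        # Sensor stage: physical measurements arrive as plain numbers.
        drive = sum(v * self.weight for v in sensed_voltages)
        # Model stage: the mathematical neuron decides whether to fire.
        if drive >= self.threshold:
            # Actuator stage: translate the informational spike back into
            # a physical effect (e.g., triggering neurotransmitter release).
            return "release_neurotransmitter"
        return None  # sub-threshold: no physical actuation

nru = PhysicallyInterfacedAN(weight=0.5, threshold=1.0)
assert nru.step([1.0, 1.5]) == "release_neurotransmitter"  # 1.25 >= 1.0
assert nru.step([0.5]) is None                             # 0.25 < 1.0
```

The essential point is that the simulation itself never touches physicality directly: everything physical enters through the sensor stage and exits through the actuator stage.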

Thus at this point I didn’t make a fundamental distinction between replicating the functions and operations of a neuron via physical embodiment (e.g., via physically embodied electrical, chemical, and/or electromechanical systems) or via virtual embodiment (usefully considered as 2nd-order embodiment, e.g., via a mathematical or computational model, whether simulation or emulation, run on a 1st-order physically embodied computational substrate).

The potential advantages, disadvantages, and categorical differences between these two approaches were still a few months away. When I discovered ANs, still thinking of them as physically embodied electronic devices rather than as mathematical or computational models, I hadn’t yet moved on to ways of preserving the organic brain itself so as to delay its organic death. Their utility in constituting a more permanent, durable, and readily repairable supplement for our biological neurons wasn’t yet apparent.

I initially saw their utility as being intelligence amplification, extension, and modification through their integration with the existing biological brain. I realized that they were categorically different from Brain-Computer Interfaces (BCIs) and normative neural prostheses in being able to become an integral and continuous part of our minds and personalities—or, more properly, the subjective, experiential parts of our minds. If they communicated with single neurons and interacted with them on their own terms—if the two were operationally indistinct—then they could become a continuous part of us in a way that didn’t seem possible for normative BCI, due to BCI’s fundamental operational dissimilarity with existing biological neural networks. I also collected research on the artificial synthesis and regeneration of biological neurons as an alternative to ANs. This approach would replace an aging or dying neuron with an artificially synthesized but still structurally and operationally biological neuron, so as to maintain the aging or dying neuron’s existing connections and relative location. I saw this procedure (i.e., adding artificial or artificially synthesized but still biological neurons to the existing neurons constituting our brains, not yet for the purposes of gradually replacing the brain but instead for the purpose of mental expansion and amplification) as not only allowing us to extend our existing functional and experiential modalities (e.g., making us smarter through an increase in synaptic density and connectivity, and an increase in the number of neurons in general), but even to create fundamentally new functional and experiential modalities, categorically unimaginable to us now, via the integration of wholly new Artificial Neural Networks embodying such new modalities.
Note that I saw this as newly possible with my cybernetic-body approach because additional space could be made for the additional neurons and neural networks, whereas the degree with which we could integrate new, artificial neural networks in a normal biological body would be limited by the available volume of the unmodified skull.

Before I discovered ANs, I speculated in my notes as to whether the “bionic nerves” alluded to in some of the literature I had collected by this point (specifically regarding BCI, neural prostheses, and the ability to operatively connect a robotic prosthetic extremity—e.g., an arm or a leg—via BCI) could be used to extend the total number of neurons and synaptic connections in the biological brain. This sprang from my knowledge of the operational similarities between neurons and muscle cells, both belonging to the larger class of excitable cells.

Kurzweil’s cyborgification approach (i.e., that we could integrate non-biological systems with our biological brains to such an extent that the biological portions become so small as to be negligible to our subjective-continuity when they succumb to cell-death, thus achieving effective immortality without needing to replace any of our existing biological neurons at all) may have been implicit in this concept. I envisioned our brains increasing in size many times over, such that the majority of the mind would be embodied or instantiated by the artificial portions rather than the biological ones. From this initial premise it follows naturally that the degree to which the loss of a part of our brain affects our emergent personalities depends on how large that lost part is in comparison to the brain as a whole (other potential metrics besides size include connectivity and the degree to which other systems depend on that portion for their own normative operation); the loss of a lobe is much worse than the loss of a neuron. The lack of any explicit statement of this realization in my notes from this period, however, makes this mere speculation.

It wasn’t until November 11, 2006, that I had the fundamental insight underlying mind-uploading—that the replacement of existing biological neurons with non-biological functional equivalents that maintain the existing relative location and connection of such biological neurons could very well facilitate maintaining the memory and personality embodied therein or instantiated thereby—essentially achieving potential technological immortality, since the approach is based on replacement and iterations of replacement-cycles can be run indefinitely. Moreover, the fact that we would be manufacturing such functional equivalents ourselves means that we could not only diagnose potential eventual dysfunctions more easily and quickly, but could manufacture them with readily replaceable parts, thus simplifying the process of physically remediating any such dysfunction or operational degradation, even going so far as to include systems for the safe import and export of replacement components, or to make all such components readily detachable, so that we wouldn’t have to damage adjacent structures and systems in the process of removing a given component.

Perhaps it wasn’t so large a conceptual step from knowledge of the existence of computational models of neurons to the realization of using them to replace existing biological neurons towards the aim of immortality. Perhaps I take too much credit for independently conceiving both the underlying conceptual gestalt of mind-uploading, as well as some specific technologies and methodologies for its pragmatic technological implementation. Nonetheless, it was a realization I arrived at on my own, and was one that I felt would allow us to escape the biological death of the brain itself.

While I was aware (after a little more research) that ANNs were mathematical (and thus computational) models of neurons, hereafter referred to as the informationalist-functionalist approach, I felt that a physically embodied (i.e., not computationally emulated or simulated) prosthetic approach, hereafter referred to as the physicalist-functionalist approach, would be a better approach to take. This was because even if the brain were completely reducible to computation, a prosthetic approach would necessarily facilitate the computation underlying the functioning of the neuron (as the physical operations of biological neurons do presently), and if the brain proved to be computationally irreducible, then the prosthetic approach would in such a case presumably preserve whatever salient physical processes were necessary. So the prosthetic approach didn’t necessitate the computational-reducibility premise – but neither did it preclude such a view, thereby allowing me to hedge my bets and increase the cumulative likelihood of maintaining subjective-continuity of consciousness through substrate-replacement in general.

This marks a telling proclivity recurrent throughout my project: the development of mutually exclusive and methodologically and/or technologically alternate systems for a given objective, each based upon alternate premises and contingencies – a sort of possibilizational web unfurling fore and outward. After all, if one approach failed, then we had alternate approaches to try. This seemed like the work-ethic and conceptualizational methodology that would best ensure the eventual success of the project.

I also had less assurance in the sufficiency of the informational-functionalist approach at the time, stemming mainly from a misconception about the premises of normative Whole-Brain Emulation (WBE). When I first discovered ANs, I was more dubious about the computational reducibility of the mind, because I thought that it relied on the premise that neurons act in a computational fashion (i.e., like normative computational paradigms) to begin with—thus a conflation of classical computation with neural operation—rather than on the conclusion, drawn from the Church-Turing thesis, that mind is computable because the universe is. It is not that the brain is a computer to begin with, but that we can model any physical process via mathematical/computational emulation and simulation. The latter is the correct view, and I didn’t really realize this until after I had discovered the WBE roadmap in 2010. This fundamental misconception allowed me, however, to also independently arrive at the insight underlying the real premise of WBE: that combining premise A (that we have various mathematical and computational models of neuron behavior) with premise B (that we can run mathematical models on computers) yields the conclusion C (that we can simply run the relevant mathematical models on computational substrate, thereby effectively instantiating the mind “embodied” in those neural operations, while simultaneously eliminating many logistical and technological challenges of the prosthetic approach). This seemed likelier than the original assumption—conflating neuronal activity with normative computation, as a special case not applicable to, say, muscle cells or skin cells, which wasn’t the presumption WBE makes at all—because this approach only requires the ability to mathematically model anything, rather than relying on a fundamental equivalence between two different types of physical system (neuron and classical computer).

The fact that I mistakenly saw it as an approach to emulation that was categorically dissimilar to normative WBE also urged me on to continue conceptual development of the various sub-aims of the project after having found that the idea of brain emulation already existed, because I thought that my approach was sufficiently different to warrant my continued effort.

There are other reasons for suspecting that mind may not be computationally reducible using current computational paradigms—reasons that rely on neither vitalism (i.e., the claim that mind is at least partially immaterial and irreducible to physical processes) nor on the invalidity of the Church-Turing thesis. This line of reasoning has nothing to do with functionality and everything to do with possible physical bases for subjective-continuity, both a) immediate subjective-continuity (i.e., how can we be a unified, continuous subjectivity if all our component parts are discrete and separate in space?), which can be considered the capacity to have subjective experience, also called sentience (as opposed to sapience, which designates the higher cognitive capacities, like abstract thinking), and b) temporal subjective-continuity (i.e., how do we survive as continuous subjectivities through a process of gradual substrate replacement?). Thus this argument impacts the possibility of computationally reproducing mind only insofar as the definition of mind is not strictly functional but is made to include a subjective sense of self—or immediate subjective-continuity. Note that subjective-continuity through gradual replacement is not speculative (only the scale and rate required to sufficiently implement it are), but rather has a proof of concept in the normal metabolic replacement of the neuron’s constituent molecules. Each of us is materially a different person than we were 7 years ago, and we still claim to retain subjective-continuity. Thus, gradual replacement works; it is just the scale and rate required that are in question.

This is another way in which my approach and project differs from WBE. WBE equates functional equivalence (i.e., the same output via different processes) with subjective equivalence, whereas my approach involved developing variant approaches to neuron-replication-unit design that were each based on a different hypothetical basis for instantive subjective continuity.

Are Current Computational Paradigms Sufficient?

Biological neurons are both analog and binary. It is useful to consider a 1st tier of analog processes, manifest in the action potentials occurring all over the neuronal soma and terminals, with a 2nd tier of binary processing, in that either the APs’ sum crosses the threshold value needed for the neuron to fire, or it falls short and the neuron fails to fire. Thus the analog processes form the basis of the digital ones. Moreover, the neuron is in an analog state even in the absence of membrane depolarization, through the generation of the resting-membrane potential (maintained via active ion-transport proteins), which is analog rather than binary because it always undergoes minor fluctuations, being instantiated by an active process (ion pumps). Thus the neuron at any given time is always in the process of a state-transition (including minor state-transitions still within the variation-range allowed by a given higher-level static state; e.g., the resting membrane potential is a single state, yet still undergoes minor fluctuations because the ions and components manifesting it undergo state-transitions without the resting-membrane potential itself undergoing one), and thus is never definitively on or off. This brings us to the first potential physical basis for both immediate and temporal subjective-continuity. Analog states are continuous, and the fact that there is never a definitive break in the processes occurring at the lower levels of the neuron represents a potential basis for our subjective sense of immediate and temporal continuity.

Paradigms of digital computation, on the other hand, are at the lowest scale either definitively on or definitively off. While any voltage within a certain range will cause the generation of an output, it is still at base binary because in the absence of input the logic elements are not producing any sort of fluctuating voltage—they are definitively off. In binary computation, the substrates undergo a break (i.e., region of discontinuity) in their processing in the absence of inputs, and are in this way fundamentally dissimilar to the low-level operational modality of biological neurons by virtue of being procedurally discrete rather than procedurally continuous.

If the premise holds true that the analog and procedurally continuous nature of neuron functioning (including action potentials, resting-membrane potential, and metabolic processes) forms a potential basis for immediate and temporal subjective-continuity, then current digital paradigms of computation may prove insufficient for maintaining subjective-continuity if used as the substrate in a gradual-replacement procedure, while still being sufficient to functionally replicate the mind in all empirically verifiable metrics and measures. This is due both to the operational modality of binary processing (i.e., lack of analog output) and to its procedural modality (the lack of temporal continuity, i.e., of minor fluctuations around a baseline state when in a resting or inoperative state). A logic element could have a fluctuating resting voltage rather than the absence of any voltage, and could thus be procedurally continuous while still being operationally discrete by producing solely binary outputs.

So there are two possibilities here. One is that any physical substrate used to replicate a neuron (whether via 1st-order embodiment, a.k.a. prosthesis/physical systems, or via 2nd-order embodiment, a.k.a. computational emulation or simulation) must not undergo a break in its operation in the absence of input (because biological neurons do not, and this may be a potential basis for instantive subjective-continuity), but rather must produce a continuous or uninterrupted signal when in a “steady-state” (i.e., in the absence of inputs). The second possibility includes all the premises of the first, but adds that such an inoperative-state signal (or “no-inputs”-state signal) must undergo minor fluctuations, because only then is a steady stream of causal interaction occurring; producing a perfectly steady signal could be as discontinuous as no signal at all, like “being on pause”.
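
The second possibility can be sketched as a toy element that is procedurally continuous (its resting signal always fluctuates slightly, so there is never a dead region of "no process at all") while remaining operationally discrete (its functional output is strictly binary). The function name, threshold, and noise magnitude are illustrative assumptions of mine, not a proposed design.

```python
import random

def element_state(inputs, threshold=1.0, noise=0.01):
    # Procedural continuity: even with no inputs, the element carries a
    # minutely fluctuating baseline signal rather than a definitive "off".
    resting_signal = random.uniform(-noise, noise)
    # Operational discreteness: the functional output stays strictly binary.
    output = 1 if sum(inputs) >= threshold else 0
    return resting_signal, output

# With no inputs the element is functionally silent yet never on pause:
signal, out = element_state([])
assert out == 0 and abs(signal) <= 0.01
```

A conventional logic gate, by contrast, would return no baseline signal at all in the no-inputs case, which is precisely the procedural break at issue.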

Thus one reason for developing the physicalist-functionalist (i.e., physically embodied prosthetic) approach to NRU design was a hedging of bets, in the case that a.) current computational substrates fail to replicate a personally continuous mind for the reasons described above, or b.) we fail to discover the principles underlying a given physical process—thus being unable to predictively model it—but still succeed in integrating them with the artificial systems comprising the prosthetic approach until such a time as to be able to discover their underlying principles, or c.) in the event that we find some other, heretofore unanticipated conceptual obstacle to computational reducibility of mind.


Bibliography

Copeland, B. J. (2008). The Church-Turing Thesis. In The Stanford Encyclopedia of Philosophy (Fall 2008 Edition). Retrieved February 28, 2013, from http://plato.stanford.edu/archives/fall2008/entries/church-turing

Crick, F. (1984, November 8-14). Memory and molecular turnover. Nature, 312(5990), 101. PMID: 6504122

Criterion of Falsifiability. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/201091/criterion-of-falsifiability

Drexler, K. E. (1986). Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Books.

Grabianowski, E. (2007). How Brain-computer Interfaces Work. Retrieved February 28, 2013, from http://computer.howstuffworks.com/brain-computer-interface.htm

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013, from http://www.minduploading.org/

Martins, N. R., Erlhagen, W. & Freitas Jr., R. A. (2012). Non-destructive whole-brain monitoring using nanorobots: Neural electrical data rate requirements. International Journal of Machine Consciousness. Retrieved February 28, 2013, from http://www.nanomedicine.com/Papers/NanoroboticBrainMonitoring2012.pdf

Narayan, A. (2004). Computational Methods for NEMS. Retrieved February 28, 2013, from http://nanohub.org/resources/407

Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap (Technical Report #2008-3). Future of Humanity Institute, Oxford University.

Star, E. N., Kwiatkowski, D. J. & Murthy, V. N. (2002). Rapid turnover of actin in dendritic spines and its regulation by activity. Nature Neuroscience, 5, 239-246.

Tsien, J. Z., Rampon, C., Tang, Y. P. & Shimizu, E. (2000). NMDA receptor dependent synaptic reinforcement as a crucial process for memory consolidation. Science, 290, 1170-1174.

Zwass, V. (2013). Neural Network. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/410549/neural-network

Wolf, W. (2009, March). Cyber-physical Systems. In Embedded Computing. Retrieved February 28, 2013, from http://www.jiafuwan.net/download/cyber_physical_systems.pdf