
Transhumanism, Technology, and Science: To Say It’s Impossible Is to Mock History Itself – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
June 30, 2013
******************************
One of the most common arguments made against Transhumanism, Technoprogressivism, and the transformative potential of emerging, converging, and disruptive technologies may also be the weakest: technical infeasibility. Some thinkers attack the veracity of Transhumanist claims on moral grounds, arguing that we are committing a transgression against human dignity (often grounded, in turn, in an ontological claim of a static human nature that shan’t be tampered with), or on grounds of safety, arguing that humanity isn’t responsible enough to wield such technologies without unleashing their destructive capabilities. These categories of counter-argument (morality and safety, respectively) are more often than not made by people somewhat more familiar with the community and its common points of rhetoric.
***
In other words, these are the real, salient, and significant problems that need to be addressed by the Transhumanist and Technoprogressive communities. The good news is that the people making the most progress in deliberating the possible repercussions of emerging technologies are those very communities. The large majority of thinkers and theoreticians working on Existential Risk and Global Catastrophic Risk, such as the Future of Humanity Institute and the Lifeboat Foundation, share Technoprogressive inclinations. Meanwhile, the largest proponents of the need to ensure wide availability of enhancement technologies, as well as of the need to provide personhood rights to non-biologically-substrated persons, are found amidst the ranks of Technoprogressive think tanks like the IEET.
***

A more frequent Anti-Transhumanist and Anti-Technoprogressive counter-argument, by contrast, and one most often launched by people approaching Transhumanist and Technoprogressive communities from the outside, with little familiarity with their common points of rhetoric, is the claim of technical infeasibility based upon little more than sheer incredulity.

Sometimes a concept or notion simply seems too unprecedented to be possible. But it is all too easy for us to get stuck in a spacetime rut along the continuum of culture and feel that if something were possible, it would either have already happened or would be in the final stages of completion today. “If something is possible, then why hasn’t anyone done it yet? Shouldn’t the fact that it has yet to be accomplished indicate that it isn’t possible?” This conflates ought with is (which Hume showed us is a fallacy) and ought with can; ought is not necessarily correlative with either. At the risk of stating the laughably obvious, something must occur at some point in order for it to occur at all. The Moon landing happened in 1969 because it happened in 1969, and to have argued in 1968 that it simply wasn’t possible solely because it had never been done before would not have been a valid argument for its technical infeasibility.

If history has shown us anything, it has shown us that history is a fantastically poor indicator of what will and will not become feasible in the future. Statistically speaking, it seems as though the majority of things that were said to be impossible to implement via technology have nonetheless come into being. Likewise, it seems as though the majority of feats it was said to be possible to facilitate via technology have also come into being. The ability to possiblize the seemingly impossible via technological and methodological in(ter)vention has been exemplified throughout the course of human history so prominently that we might as well consider it a statistical law.

We can feel the sheer fallibility of the infeasibility-from-incredulity argument intuitively when we consider how credible it would have seemed a mere 100 years ago to claim that we would soon be able to send sentences into the air, to be routed to a device in your pocket (and only your pocket, not the device in the pocket of the person sitting right beside you). How likely would it have seemed 200 years ago if you claimed that 200 years hence it would be possible to sit comfortably and quietly in a chair in the sky, inside a large tube of metal that fails to fall fatally to the ground?

Simply look around you. An idiosyncratic genus of great ape did this! Consider how remarkably absurd it would seem for the gorilla genus to have coordinated its efforts to build skyscrapers; to engineer devices that took them to the Moon; to be able to send a warning or mating call to the other side of the Earth in less time than such a call could actually be made via physical vocal cords. We live in a world of artificial wonder, and act as though it were the most mundane thing in the world. But considered in terms of geological time, the unprecedented feat of culture and artificial artifact just happened. We are still in the fledgling infancy of the future, which only began when we began making it ourselves.
***

We have no reason whatsoever to doubt the eventual technological feasibility of anything, really, when we consider all the things that were said to be impossible yet happened, all the things that were said to be possible and did happen, and all the things that were completely unforeseen yet happened nonetheless. In light of history, it seems more likely that a given thing would eventually be possible via technology than that it wouldn’t ever be possible. I fully appreciate the grandeur of this claim – but I stand by it nonetheless. To claim that a given ability will probably never be possible to implement via technology is to laugh in the face of history to some extent.

The main exceptions to this claim are abilities wherein you limit or specify the route of implementation: categories of ability where the implementation is specified as part of the end-ability itself. Thus it probably would not ever be possible to, say, infer the states of all the atoms comprising the Eiffel Tower from the state of a single atom in your fingernail – here the end-ability just is to infer the state of all the atoms in the Eiffel Tower from the state of a single atom.

These exceptions also serve to illustrate the paramount feature allowing technology to possiblize the seemingly improbable: novel means of implementation. Very often there is a bottleneck in the current system we use to accomplish something that limits the scope of its abilities and prevents certain objectives from being facilitated by it. In such cases a whole new paradigm of approach is what moves progress forward toward realizing that objective. If the goal is the reversal and indefinite remediation of the causes and sources of aging, the paradigms of medicine available at the turn of the 20th century would have seemed unable to accomplish such a feat.

The new paradigm of biotechnology and genetic engineering was needed to formulate a scientifically plausible route to the reversal of aging-correlated molecular damage – a paradigm largely absent from the medical paradigms and practices common at the turn of the 20th century. It is the notion of a new route to implementation, a wholly novel way of making the changes that could lead to a given desired objective, that constitutes the real ability-actualizing capacity of technology – and one that such cases of specified implementation fail to take account of.

One might think that there are other clear exceptions to this as well: devices or abilities that contradict the laws of physics as we currently understand them – e.g., perpetual-motion machines. Yet even here we see many historical antecedents exemplifying our short-sighted foresight in regard to “the laws of physics”. Our understanding of the physical “laws” of the universe undergoes massive upheaval from generation to generation. Thomas Kuhn’s The Structure of Scientific Revolutions challenged the predominant view that scientific progress occurs by accumulated development and discovery when he argued that scientific progress is instead driven by the rise of new conceptual paradigms categorically dissimilar to those that preceded them (Kuhn, 1962), paradigms which then define the new predominant directions of research, development, and discovery in almost all areas of scientific conceptualization.

Kuhn’s insight can be seen to be paralleled by the recent rise in popularity of Singularitarianism, which today seems to have lost its strict association with I.J. Good‘s posited type of intelligence explosion created via recursively self-modifying strong AI, and now seems to encompass any vision of a profound transformation of humanity or society through technological growth, and the introduction of truly disruptive emerging and converging (e.g., NBIC) technologies.

This epistemic paradigm holds that the future is determined less by the smooth progression of existing trends and more by the massive impact of specific technologies and occurrences – the revolution of innovation. Kurzweil’s own version of Singularitarianism (Kurzweil, 2005) uses the systemic progression of trends to predict a state of affairs created by their convergence, wherein the predictable progression of trends points, in a sense, to its own destruction, as the trends culminate in our inability to predict anything past that point. We can predict only that there are factors that will significantly impede our predictive ability thereafter. Kurzweil’s and Kuhn’s thinking are also paralleled by Buckminster Fuller’s notion of ephemeralization (i.e., doing more with less), and by the post-industrial information economies and socioeconomic paradigms described by Alvin Toffler (Toffler, 1970), John Naisbitt (Naisbitt, 1982), and Daniel Bell (Bell, 1973), among others.

It can also partly be seen as inherent in almost all formulations of technological determinism, especially variants of what I call reciprocal technological determinism (not simply that technology determines or largely constitutes the determining factors of societal states of affairs, not simply that technology affects culture, but rather that culture affects technology, which then affects culture, which then affects technology) à la Marshall McLuhan (McLuhan, 1964). This broad epistemic paradigm, wherein the state of progress is determined more by small but radically disruptive changes, innovations, and deviations than by the continuation or convergence of smooth and slow-changing trends, can be seen as inherent in variants of technological determinism because technology is ipso facto (i.e., by its very defining attributes) categorically new and paradigmatically disruptive; if culture is affected significantly by technology, then it is also affected by punctuated instances of unintended, radical innovation unanticipated by existing trends.

That being said, as Kurzweil has noted, a given technological paradigm “grows out of” the paradigm preceding it, and so the extents and conditions of a given paradigm will to some extent determine the conditions and allowances of the next. But that is not to say that they are predictable; they may be inherent while still remaining non-apparent. After all, the trend of increasing precision and miniaturization in mechanical components could be seen hundreds of years ago (e.g., Babbage knew that the mechanical precision available via the manufacturing paradigms of his time impeded his ability to realize his computational engines, but that their implementation would one day be made possible by the trend of increasingly precise manufacturing standards), but the fact that this trend would continue until it culminated in the ephemeralization of Bucky Fuller (Fuller, 1938) or the mechanosynthesis of K. Eric Drexler (Drexler, 1986) was not at all apparent.

Moreover, the types of occurrence allowed by a given scientific or methodological paradigm seem, at least intuitively, to expand rather than contract as we move forward through history. This can be seen lucidly in the physics of the early 20th century. Quantum physics delivered such conceptual affronts to our intuitive notions of the possible as non-locality (i.e., quantum entanglement – and with it quantum information teleportation and even quantum energy teleportation; in other words, correlations between spatially separated physical entities that no light-speed signal could coordinate). Einstein’s theory of relativity implied such counter-intuitive notions as the measurement of quantities being relative to the velocity of the observer (e.g., the passing of time as measured by clocks will be different in space than on Earth). And the hidden-variable theory of David Bohm implied such notions as the velocity of any one particle being determined by the configuration of the entire universe. These notions belligerently contradict what we feel intuitively to be possible. Here we have claims that such strange abilities as informational and energetic teleportation, faster-than-light correlation of physical and/or informational states, and spacetime dilation are natural, non-technological properties and abilities of the physical universe.

Technology is Man’s foremost mediator of change; it is by and large through the use of technology that we expand the parameters of the possible. This is why the fact that these seemingly fantastic feats were claimed to be possible “naturally”, without technological implementation or mediation, is so significant. The notion that they are possible without technology makes them all the more fantastical and intuitively improbable.

We also sometimes forget the even more fantastic claims of what can be done through the use of technology, such as stellar engineering and mega-scale engineering, made by some of the big names in science. There is the Dyson Sphere of Freeman Dyson, which describes a technological method of harnessing potentially the entire energetic output of a star (Dyson, 1960). One can also find speculation by Dyson concerning the ability for “life and communication [to] continue for ever, using a finite store of energy” in an open universe, by utilizing smaller and smaller amounts of energy to power slower and slower computationally emulated instances of thought (Dyson, 1979).
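The arithmetic core of Dyson’s speculation can be sketched, in a deliberately simplified form of my own (his actual argument also involves thermodynamic constraints on dissipating heat at ever-lower temperatures), as a geometric series: if the n-th step of thought is engineered to consume energy

\[
E_n = E_0 \, r^n \quad (0 < r < 1), \qquad \text{so that} \qquad \sum_{n=0}^{\infty} E_n = \frac{E_0}{1 - r} < \infty ,
\]

then an unbounded number of ever-slower steps of thought can, in principle, be powered by a finite store of energy.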

There is the Tipler Cylinder (also called the Tipler Time Machine) of Frank J. Tipler, which describes a dense cylinder of infinite length rotating about its longitudinal axis so as to create closed timelike curves (Tipler, 1974). While Tipler speculated that a cylinder of finite length could produce the same effect if rotated fast enough, he didn’t provide a mathematical solution for this second claim. There is also speculation by Tipler on the ability to utilize energy harnessed from the gravitational shear created by the forced collapse of the universe at different rates and in different directions, which he argues would allow the universe’s computational capacity to diverge to infinity, essentially providing computationally emulated humans and civilizations the ability to run for an infinite duration of subjective time (Tipler, 1986, 1997).

We see such feats of technological grandeur paralleled by Kurt Gödel, who produced an exact solution to the Einstein field equations that describes a cosmological model of a rotating universe (Gödel, 1949). While cosmological evidence (e.g., suggesting that our universe is not a rotating one) indicates that his solution doesn’t describe the universe we live in, it nonetheless constitutes a hypothetically possible cosmology in which time-travel (again, via a closed timelike curve) is possible. And because closed timelike curves seem to require large amounts of acceleration – i.e. amounts not attainable without the use of technology – Gödel’s case constitutes a hypothetical cosmological model allowing for technological time-travel (which might be non-obvious, since Gödel’s case doesn’t involve such technological feats as a rotating cylinder of infinite length, rather being a result derived from specific physical and cosmological – i.e., non-technological – constants and properties).

These are large claims made by large names in science (i.e., people who do not make claims frivolously, and in most cases require quantitative indications of their possibility, often in the form of mathematical solutions, as in the cases mentioned above) and all of which are made possible solely through the use of technology. Such technological feats as the computational emulation of the human nervous system and the technological eradication of involuntary death pale in comparison to the sheer grandeur of the claims and conceptualizations outlined above.

We live in a very strange universe, a fact that is easy to forget amidst our feigned mundanity. We have no excuse to express incredulity at Transhumanist and Technoprogressive conceptualizations considering how stoically we accept such notions as the existence of sentient matter (i.e., biological intelligence) or the ability of a genus of great ape to stand on extraterrestrial land.

Thus, one of the most common counter-arguments launched at many Transhumanist and Technoprogressive claims and conceptualizations – namely, technical infeasibility based upon nothing more than incredulity and/or the lack of a definitive historical precedent – is one of the most baseless counter-arguments as well. It would be far more credible to argue for the technical infeasibility of a given endeavor within a certain time-frame. Not only do we have little, if any, indication that a given ability or endeavor will fail to eventually become realizable via technology given enough development-time, but we even have historical indication of the very antithesis of this claim, in the form of the many, many instances in which a given endeavor or feat was said to be impossible, only to be realized via technological mediation thereafter.

It is high time we accepted the fallibility of base incredulity and the infeasibility of the technical-infeasibility argument. I remain stoically incredulous at the audacity of fundamental incredulity, for nothing should seem incredible to man, who makes his own credibility in any case, and who is most at home in the necessary superfluous.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

References

Bell, D. (1973). “The Coming of Post-Industrial Society: A Venture in Social Forecasting.” New York: Basic Books. ISBN 0-465-01281-7.

Drexler, K.E. (1986). “Engines of Creation: The Coming Era of Nanotechnology.” New York: Anchor Press/Doubleday.

Dyson, F. (1960). “Search for Artificial Stellar Sources of Infrared Radiation”. Science 131: 1667-1668.

Dyson, F. (1979). “Time without end: Physics and biology in an open universe,” Reviews of Modern Physics 51 (3): 447-460.

Fuller, R.B. (1938). “Nine Chains to the Moon.” Anchor Books pp. 252–59.

Gödel, K. (1949). “An example of a new type of cosmological solution of Einstein’s field equations of gravitation”. Rev. Mod. Phys. 21 (3): 447–450.

Kuhn, Thomas S. (1962). “The Structure of Scientific Revolutions (1st ed.).” University of Chicago Press. LCCN 62019621.

Kurzweil, R. (2005). “The Singularity is Near.” Penguin Books.

McLuhan, M. (1964). “Understanding Media: The Extensions of Man”. 1st Ed. McGraw Hill, NY.

Naisbitt, J. (1982). “Megatrends: Ten New Directions Transforming Our Lives.” Warner Books.

Tipler, F. (1974) “Rotating Cylinders and Global Causality Violation”. Physical Review D9, 2203-2206.

Tipler, F. (1986). “Cosmological Limits on Computation”, International Journal of Theoretical Physics 25 (6): 617-661.

Tipler, F. (1997). The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead. New York: Doubleday. ISBN 0-385-46798-2.

Toffler, A. (1970). “Future shock.” New York: Random House.

Immortality: Bio or Techno? – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
June 5, 2013
******************************
This essay is the eleventh and final chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first ten chapters were previously published on The Rational Argumentator under the following titles:
***

I Was a Techno-Immortalist Before I Came of Age

From the preceding chapters in this series, one can see that I recapitulated many notions and conclusions found in normative Whole-Brain Emulation. I realized that measuring functional divergence between a candidate functional-equivalent and its original—while virtually or artificially replicating environmental stimuli so as to coordinate their inputs—provides an experimental methodology for empirically validating the sufficiency and efficacy of different approaches. (Note, however, that such tests could not be performed to determine which NRU designs or replication approaches would preserve subjective-continuity, if the premises entertained during later periods of my project—that subjective-continuity may require a sufficient degree of operational “sameness”, and not just a sufficient degree of functional “sameness”—are correct.) I realized that we would only need to replicate in intensive detail and rigor those parts of our brain manifesting our personalities and higher cognitive faculties (i.e., the neocortex), and could get away with replicating at lower functional resolution the parts of the nervous system dealing with perception, actuation, and the feedback between them.

I read Eric Drexler’s Engines of Creation and imported the use of nanotechnology to facilitate both functional-replication (i.e., the technologies and techniques needed to replicate the functional and/or operational modalities of existing biological neurons) and the intensive, precise, and accurate scanning necessitated thereby. This was essentially Ray Kurzweil’s and Robert Freitas’s approach to the technological infrastructure needed for mind-uploading, as I discovered in 2010 via The Singularity is Near.

My project also bears striking similarities to Dmitry Itskov’s Project Avatar. My work on the conceptual requirements for transplanting the biological brain into a fully cybernetic body — taking advantage of the technological and methodological infrastructures already in development in the separate disciplines of robotics, prosthetics, brain-computer interfaces, and sensory substitution to facilitate the operations of the body — prefigures his Phase 1. My later work on approaches to the functional replication of neurons for the purpose of gradual substrate replacement/transfer and integration also parallels his later phases, in which the brain is gradually replaced with an equivalent computational emulation.

The main difference between my project and the extant Techno-Immortalist approaches, however, lies in my later inquiries into neglected potential bases for (a) our sense of experiential subjectivity (the feeling of being, which I’ve called immediate subjective-continuity)—and thus the entailed requirements for mental substrates aiming to maintain or attain such immediate subjectivity—and (b) our sense of temporal subjective-continuity (the feeling of being the same person through a process of gradual substrate replacement—which, I take pains to remind the reader, already exists in the biological brain via the natural biological process of molecular turnover, which I called metabolic replacement throughout the course of the project), and, likewise, the requirements for mental substrates aiming to maintain temporal subjective-continuity through a gradual substrate-replacement/transfer procedure.

In this final chapter, I summarize the main approaches to subjective-continuity thus far considered, including possible physical bases for its current existence and the entailed requirements for NRU designs (that is, for Techno-Immortalist approaches to indefinite-longevity) that maintain such physical bases of subjective-continuity. I will then explore why “Substrate-Independent Minds” is a useful and important term, and try to dispel one particularly common and easy-to-make misconception resulting from it.

Why Should We Worry about Subjective-Continuity?

This concern marks perhaps the most telling difference between my project and normative Whole-Brain Emulation. Instead of stopping at the presumption that functional equivalence correlates with immediate subjective-continuity and temporal subjective-continuity, I explored several features of neural operation that looked like candidates for providing a basis of both types of subjective-continuity, by looking for those systemic properties and aspects that the biological brain possesses and other physical systems don’t. The physical system underlying the human mind (i.e., the brain) possesses experiential subjectivity; my premise was that we should look for properties not shared by other physical systems to find a possible basis for the property of immediate subjective-continuity. I’m not claiming that any of the aspects and properties considered definitely constitute such a basis; they were merely the avenues I explored throughout my 4-year quest to conquer involuntary death. I do claim, however, that we are forced to conclude that some aspect shared by the individual components (e.g., neurons) of the brain and not shared by other types of physical systems forms such a basis (which doesn’t preclude the possibility of immediate subjective-continuity being a spectrum or gradient rather than a definitive “thing” or process with non-variable parameters), or else that immediate subjective continuity is a normal property of all physical systems, from atoms to rocks.

A phenomenological proof of the non-equivalence of function and subjectivity (or subjective experientiality) is the physical irreducibility of qualia: we could understand in intricate detail the underlying physics of the brain and sense organs, and nowhere derive or infer the nature of the qualia such underlying physics embodies. To experimentally verify which approaches to replication preserve both functionality and subjectivity would necessitate a science of qualia. This could conceivably be attempted by making measured changes to the operation or inter-component relations of a subject’s mind (or sense organs)—or by integrating new sense organs or neural networks—and recording the resultant changes to his experientiality—that is, to what exactly he feels. Though such recordings would be limited by his descriptive ability, we might be able to make some progress—e.g., he could detect the generation of a new color, and communicate that it is indeed a color that doesn’t match the ones normally available to him, while still failing to communicate to others what the color is like experientially or phenomenologically (i.e., what it is like in terms of qualia).

This gets cruder the deeper we delve, however. While we have unchanging names for some individual qualia (e.g., green, sweetness, hot, cold), when it comes to the qualia corresponding to our perception of our own “thoughts” (a term by which I designate all non-normatively-perceptual experiential modalities available to the mind—thus including wordless “daydreaming” and excluding autonomic functions like digestion or respiration), we have both far less precision (i.e., fewer words with which to describe) and less accuracy (i.e., too many words for one thing, which the subject may confuse). The lack of quantitative definitions for words relating to emotions and mental modalities/faculties seems to ensure that errors would be carried forward and increase with each iteration, making precise correlation of operational/structural changes with changes to qualia or experientiality increasingly harder and more unlikely.

Thus whereas the normative movements of Whole-Brain Emulation and Substrate-Independent Minds stopped at functional replication, I explored approaches to functional replication that preserved experientiality (i.e., a subjective sense of anything) and that maintained subjective-continuity (the experiential correlate of feeling like being yourself) through the process of gradual substrate-transfer.

I do not mean to undermine in any way Whole-Brain Emulation or the movement towards Substrate-Independent Minds promoted by such people as Randal Koene (via, formerly, his minduploading.org website and, more recently, his Carbon Copies project), Anders Sandberg and Nick Bostrom (through their WBE Roadmap), and various other projects on connectomes. These projects are inestimably important, but conceptions of subjective-continuity (beyond its relation to functional equivalence) are outside their scope.

Whether or not subjective-continuity is possible through a gradual-substrate-replacement/transfer procedure is not under question. That we achieve and maintain subjective-continuity despite our constituent molecules being replaced within a period of 7 years, through what I’ve called “metabolic replacement” but what would more normatively be called “molecular-turnover” in molecular biology, is not under question either. What is under question is (a) what properties biological nervous systems possess that could both provide a potential physical basis for subjective-continuity and that other physical systems do not possess, and (b) what the design requirements are for approaches to gradual substrate replacement/transfer that preserve such postulated sources of subjective-continuity.

Graduality

This was the first postulated basis for preserving temporal subjective-continuity. Our bodily systems’ constituent molecules are all replaced within a span of roughly 7 years, which provides empirical verification of the existence of temporal subjective-continuity through gradual substrate replacement. This is not, however, an actual physical basis for immediate subjective-continuity, as the later avenues of enquiry are. It is instead a way of avoiding externally induced subjective-discontinuity, rather than a way of maintaining the existing biological bases of subjective-continuity. We are most likely to avoid negating subjective-continuity through a substrate-replacement procedure if we try to maintain the existing degree of graduality (the molecular-turnover or “metabolic-replacement” rate) that exists in biological neurons.

The reasoning behind concerns of graduality also serves to illustrate a common misconception created by the term “Substrate-Independent Minds”. This term should denote the premise that mind can be instantiated on different types of substrate, in the way that a given computer program can run on different types of computational hardware. It stems from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that mind is an emergent process not reducible to its isolated material constituents, while still being instantiated thereby. This first (legitimate) interpretation is a refutation of all claims of metaphysical vitalism or substance dualism.

The term should not denote the claim that because mind is software, we can send our minds (say, encoded in a wireless signal) from one substrate to another without subjective-discontinuity. This second meaning would entail a non-gradual substrate-replacement procedure (that is, the wholesale reconstruction of a duplicate mind without any gradual integration procedure). In such a case one stops all causal interaction between components of the brain—in effect putting it on pause. The brain is now static. This is different even from being in an inoperative state, where at least the components (i.e., neurons) still undergo minor operational fluctuations and are still “on” in an important sense (see “Immediate Subjective-Continuity” below), which is not the case here. Beaming between substrates necessitates that all causal interaction—and thus procedural continuity—between software components is halted during the interval of time in which the information is encoded, sent wirelessly, and subsequently decoded. The mind would be reinstantiated upon arrival in the new substrate, yes, but not without being put on pause in the interim.

The phrase “Substrate-Independent Minds” is an important and valuable one and should indeed be championed with righteous vehemence—but only in regard to its first meaning (that mind can be instantiated on various different substrates) and not its second, illegitimate meaning (that we ourselves can switch between mental substrates, without any sort of gradual-integration procedure, and still retain subjective-continuity).

Later lines of thought in this regard consisted of positing several sources of subjective-continuity and then conceptualizing various different approaches or varieties of NRU-design that would maintain these aspects through the gradual-replacement procedure.

Immediate Subjective-Continuity

This line of thought explored whether certain physical properties of biological neurons provide the basis for subjective-continuity, and whether current computational paradigms would need to possess such properties in order to serve as a viable substrate-for-mind—that is, one that maintains subjective-continuity. The biological brain has massive parallelism—that is, its separate components are instantiated concurrently in time and space; they actually exist and operate at the same time. By contrast, current paradigms of computation, with a few exceptions, are predominantly serial. They instantiate a given component or process one at a time and jump between components or processes so as to integrate these separate instances and create the illusion of continuity. If such computational paradigms were used to emulate the mind, then only one component (e.g., neuron or ion channel, depending on the chosen model-scale) would be instantiated at any given time. This line of thought postulates that computers emulating the mind may need to be massively parallel in the same way that the biological brain is in order to preserve immediate subjective-continuity.
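To make the serial-versus-parallel distinction concrete, here is a deliberately trivial Python sketch. The neuron class and its update rule are placeholders of my own and not part of any actual emulation proposal; Python threads are also not genuinely concurrent (because of the interpreter's global lock), so the second half is only a schematic stand-in for massive hardware parallelism.

```python
import threading

class ToyNeuron:
    """A trivially simplified neuron that merely integrates its input."""
    def __init__(self, name):
        self.name = name
        self.state = 0.0

    def step(self, inp):
        # The update rule is a placeholder; only the scheduling matters here.
        self.state += inp

neurons = [ToyNeuron(f"n{i}") for i in range(4)]
inputs = [0.1, 0.2, 0.3, 0.4]

# Serial (time-sliced) emulation: only one component is ever "live" at a time;
# apparent concurrency is an illusion produced by iterating over the components.
for n, x in zip(neurons, inputs):
    n.step(x)

# Parallel emulation (schematically approximated with threads): every component
# is instantiated and updated at the same time, as biological neurons are. The
# line of thought above asks whether this difference matters for immediate
# subjective-continuity.
threads = [threading.Thread(target=n.step, args=(x,)) for n, x in zip(neurons, inputs)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```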

Procedural Continuity

Much like the preceding line of thought, this one postulates that a possible basis for temporal subjective-continuity is the resting membrane potential of neurons. While in an inoperative state—i.e., not being impinged upon by incoming action potentials, or not being stimulated—a neuron (a) isn’t definitively off, but rather produces a baseline voltage that assures that there is no break (or region of discontinuity) in its operation, and (b) still undergoes minor fluctuations from the baseline value within a small deviation range, thus showing that causal interaction amongst the components emergently instantiating that resting membrane potential (namely ion pumps) never halts. Logic gates, on the other hand, do not produce a continuous voltage when in an inoperative state. This line of thought claims that computational elements used to emulate the mind should exhibit the generation of such a continuous inoperative-state signal (e.g., a voltage) in order to maintain subjective-continuity. The stronger version of the claim holds that the continuous inoperative-state signal produced by such computational elements should undergo minor fluctuations (i.e., state transitions) within the range of the larger inoperative-state signal, thereby maintaining causal interaction among lower-level components and thus exhibiting the postulated basis for subjective-continuity—namely, procedural continuity.
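A minimal numerical sketch of the contrast this line of thought draws, with all values invented for illustration: a neuron-like element emits a continuously fluctuating baseline voltage even when unstimulated, whereas an idealized logic gate (as characterized above) simply sits at a fixed output with no ongoing activity.

```python
import random

REST_POTENTIAL_MV = -70.0   # typical resting membrane potential
NOISE_MV = 0.5              # illustrative fluctuation range around the baseline

def neuron_like_signal(stimulated: bool) -> float:
    """Even when unstimulated, emit a fluctuating baseline voltage, so the
    element's causal activity never halts (the 'procedural continuity' above)."""
    if stimulated:
        return 30.0  # crude stand-in for an action-potential peak
    return REST_POTENTIAL_MV + random.uniform(-NOISE_MV, NOISE_MV)

def logic_gate_signal(stimulated: bool) -> float:
    """An idealized logic gate: when not driven, it holds a fixed output and
    exhibits no ongoing internal activity."""
    return 1.0 if stimulated else 0.0

# Sample both kinds of element while unstimulated.
print([round(neuron_like_signal(False), 2) for _ in range(5)])  # fluctuating baseline
print([logic_gate_signal(False) for _ in range(5)])             # flat, inactive
```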

Operational Isomorphism

This line of thought claims that a possible source of subjective-continuity is the uniformity of the baseline components comprising the emergent system instantiating mind. In the physical brain this isn’t a problem, because the higher-scale components (e.g., single neurons, sub-neuron components like ion channels and ion pumps, and the individual protein complexes forming the sub-components of an ion channel or pump) are instantiated by the lower-level components, and those lower-level components are more similar to one another in terms of the rules determining their behavior and state-changes. At the molecular scale, the features determining state-changes (intra-molecular forces, atomic valences, etc.) are the same. This changes as we go up the scale—most notably at the scale of high-level neural regions/systems. In a software model, however, we have a choice as to what scale we use as our model-scale. This postulated source of subjective-continuity would entail that we choose as our model-scale one at which the components have a high degree of this property (operational isomorphism, or similarity), and that we not choose a scale at which the components have a lesser degree of it.

Operational Continuity

This line of thought explored the possibility that we might introduce operational discontinuity by modeling (i.e., computationally instantiating) not the software instantiated by the physical components of the neuron, but those physical components themselves—which for illustrative purposes can be considered as the difference between instantiating the software directly and instantiating the physics of the logic gates giving rise to the software. Though the software would still necessarily be instantiated—vicariously, as a result of computationally instantiating its biophysical foundation rather than the software directly—we may be introducing additional operational steps and thus adding an unnecessary dimension of discontinuity that needlessly jeopardizes the likelihood of subjective-continuity.

These concerns are wholly divorced from functionalist concerns. If we disregarded these potential sources of subjective-continuity, we could still functionally replicate a mind in all empirically verifiable measures yet nonetheless fail to create minds possessing experiential subjectivity. Moreover, the verification experiments discussed in Part 2 do provide a falsifiable methodology for determining which approaches best satisfy the requirements of functional equivalence. They do not, however, provide a method of determining which postulated sources of subjective-continuity are true—simply because we have no falsifiable measures of either immediate or temporal subjective-discontinuity, other than functionality. If functional equivalence failed, that would tell us that subjective-continuity failed to be maintained; if functional equivalence were achieved, however, that would not entail that subjective-continuity was maintained.

Bio or Cyber? Does It Matter?

Biological approaches to indefinite longevity, such as Aubrey de Grey’s SENS and Michael Rose’s evolutionary selection for longevity, among others, have both comparative advantages and drawbacks. With biological approaches, the chances of introducing subjective-discontinuity are virtually nonexistent compared to non-biological (which I will refer to as Techno-Immortalist) approaches. This makes them at once more appealing. However, it remains to be seen whether the advantages of the Techno-Immortalist approach outweigh its comparative dangers in regard to the potential to introduce subjective-discontinuity. If such dangers can be obviated, however, it has certain potentials which Bio-Immortalist projects lack—or which are at least comparatively harder to facilitate using biological approaches.

Perhaps foremost among these potentials is the ability to actively modulate and modify the operations of individual neurons, which, if integrated across scales (that is, the concerted modulation/modification of whole emergent neural networks and regions via operational control over their constituent individual neurons), would allow us to take control over our own experiential and functional modalities (i.e., our mental modes of experience and general abilities/skills), thus increasing our degree of self-determination and the control we exert over the circumstances and determining conditions of our own being. Self-determination is the sole central and incessant essence of man; it is his means of self-overcoming—of self-dissent in a striving towards self-realization—and the ability to increase the extent of such self-control, self-mastery, and self-actualization is indeed a comparative advantage of techno-immortalist approaches.

To modulate and modify biological neurons, on the other hand, necessitates either high-precision genetic engineering or, more likely, the use of nanotech (i.e., NEMS), because whereas the proposed NRUs already have the ability to controllably vary their operations, biological neurons necessitate an external technological infrastructure for facilitating such active modulation and modification.

Biological approaches to increased longevity also appear to necessitate less technological infrastructure in terms of basic functionality. Techno-Immortalist approaches require precise scanning technologies and techniques that neither damage nor distort (i.e., affect to the point of operational and/or functional divergence from their normal in situ state of affairs) the features and properties they are measuring. However, there is a useful distinction to be made between biological approaches to increased longevity and biological approaches to indefinite longevity. Aubrey de Grey’s notion of Longevity Escape Velocity (LEV) serves to illustrate this distinction. With SENS and most biological approaches, he points out that although remediating certain biological causes of aging will extend our lives, by that time other causes of aging that had been superseded (i.e., prevented from making a significant impact on aging) by the higher-impact causes may begin to make a non-negligible impact. Aubrey’s proposed solution is LEV: if we can develop remedies for these newly significant causes within the amount of time gained by remediating the first set of causes, then we can stay on the leading edge and continue to prolong our lives. This is in contrast to other biological approaches, like Eric Drexler’s conception of nanotechnological cell-maintenance and cell-repair systems, which—by virtue of being able to fix any source of molecular damage or disarray vicariously, not by eliminating the source but through iterative repair and/or replacement of its causes or “symptoms”—will continue to work on any new molecular causes of damage without any new upgrades or innovations to their underlying technological and methodological infrastructures.
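A toy calculation of the LEV logic just described, with numbers that are entirely illustrative rather than empirical: as long as each new round of therapy adds more remaining life expectancy than the time it took to develop, remaining life expectancy never runs out.

```python
def remaining_life_trajectory(initial_remaining=30, dev_time=10, years_gained=15, rounds=5):
    """Track remaining life expectancy (in years) across successive therapy
    generations. All parameters are illustrative placeholders."""
    remaining = initial_remaining
    trajectory = [remaining]
    for _ in range(rounds):
        remaining -= dev_time        # years spent waiting for the next therapy
        if remaining <= 0:
            break                    # the next therapy arrived too late
        remaining += years_gained    # years added by the new therapy
        trajectory.append(remaining)
    return trajectory

# Gains outpace development time (15 > 10): remaining life keeps growing (LEV).
print(remaining_life_trajectory())                  # [30, 35, 40, 45, 50, 55]
# Gains lag development time (8 < 10): remaining life shrinks each round.
print(remaining_life_trajectory(years_gained=8))    # [30, 28, 26, 24, 22, 20]
```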

Such repair-based systems would be more appropriately deemed indefinite-biological-longevity technologies, in contrast to biological-longevity technologies. Techno-Immortalist approaches are by and large exclusively of the indefinite-longevity-extension variety, and so have an advantage over certain biological approaches to increased longevity, but such advantages do not apply to biological approaches to indefinite longevity.

A final advantage of Techno-Immortalist approaches is the independence from external environments they provide us. This also makes death by accident far less likely, both by enabling us to have more durable bodies and by providing independence from external environments, meaning that certain extremes of temperature, pressure, impact-velocity, atmosphere, etc., will not immediately entail our death.

I do not want to discredit any of the approaches to immortality discussed in this essay, nor any I haven’t mentioned. Every striving and attempt at immortality is virtuous and righteous, and this sentiment will only become more and more apparent, culminating on the day when humanity looks back and wonders how we could have spent so very much money and effort on the Space Race to the Moon with no perceivable scientific, resource, or monetary gain (though there were some nationalistic and militaristic considerations in terms of America not being superseded on either account by Russia), yet took so long to make a concerted global effort to first demand and then implement well-funded attempts to finally defeat death—that inchoate progenitor of 100,000 unprecedented cataclysms a day. It’s true—the world ends 100,000 times a day, to be lighted upon not once more for all of eternity. Every day. What have you done to stop it?

So What?

Indeed, so what? What does this all mean? After all, I never actually built any systems, or did any physical experimentation. I did, however, do a significant amount of conceptual development and thinking on both the practical consequences (i.e., required technologies and techniques, different implementations contingent upon different premises and possibilities, etc.) and the larger social and philosophical repercussions of immortality prior to finding out about other approaches. And I planned on doing physical experimentation and building physical systems; but I thought that working on it in my youth, until such a time as to be in the position to test and implement these ideas more formally via academia or private industry, would be better for the long-term success of the endeavor.

As noted in Chapter 1, this reifies the naturality and intuitive simplicity of indefinite longevity’s ardent desirability and fervent feasibility, along a large variety of approaches ranging from biotechnology to nanotechnology to computational emulation. It also reifies the naturality and desirability of Transhumanism. I saw one of the virtues of this vision as its potential to make us freer, to increase our degree of self-determination, as giving us the ability to look and feel however we want, and the ability to be—and more importantly to become—anything we so desire. Man is marked most starkly by his urge and effort to make his own self—to formulate the best version of himself he can, and then to actualize it. We are always reaching toward our better selves—striving forward in a fit of unbound becoming toward our newest and thus truest selves; we always have been, and with any courage we always will.

Transhumanism is but the modern embodiment of our ancient striving towards increased self-determination and self-realization—of all we’ve ever been and done. It is the current best contemporary exemplification of what has always been the very best in us—the improvement of self and world. Indeed, the ‘trans’ and the ‘human’ in Transhumanism can only signify each other, for to be human is to strive to become more than human—or to become more so human, depending on which perspective you take.

So come along and long for more with me; the best is e’er yet to be!

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Koene, R. (2011). What is carboncopies.org? Retrieved February 28, 2013 from http://www.carboncopies.org/

Rose, M. (October 28 2004). Biological Immortality. In B. Klein, The Scientific Conquest of Death (pp. 17-28). Immortality Institute.

Sandberg, A., & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap, Technical Report #2008-3. Retrieved February 28, 2013 from http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013 from http://www.minduploading.org/

de Grey, ADNJ (2004). Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now. PLoS Biol 2(6): e187. doi:10.1371/journal.pbio.0020187

Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 18, 2013
******************************
This essay is the third chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first two chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death” and “Immortality: Material or Ethereal? Nanotech Does Both!“.
***

The simplest approach to the functional replication of biological neurons I conceived of during this period involved what is normally called a “black-box” model of a neuron. This was already a concept in the wider brain-emulation community, but I had yet to find out about it. This is even simpler than the mathematically weighted Artificial Neurons discussed in the previous chapter. Rather than emulating or simulating the behavior of a neuron (i.e., using actual computational—or, more generally, signal—processing), we (1) determine the range of input values that a neuron responds to, (2) stimulate the neuron at each interval within that input range (the number of intervals depending on the precision of the stimulus), and (3) record the corresponding range of outputs.

This reduces the neuron to what is essentially a look-up table (or, more formally, an associative array). The input ranges I originally considered (in 2007) consisted of a range of electrical potentials, but were later developed (in 2008) to include different cumulative organizations of specific voltage values (i.e., some inputs activated and others not) and finally the chemical inputs and outputs of neurons. The black-box approach was eventually extended to the sub-neuron scale—e.g., to sections of the cellular membrane. This creates a greater degree of functional precision, bringing the functional modality of the black-box NRU class into greater accordance with the functional modality of biological neurons. (That is, it is closer to biological neurons because they do in fact process multiple inputs separately, rather than as a single cumulative sum, as in the previous versions of the black-box approach.) We would also have a higher degree of variability for a given quantity of inputs.
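A minimal sketch of the black-box idea in code. The `stimulate` and `record` arguments are hypothetical placeholders for whatever instrumentation actually drives and measures the biological neuron; the voltage range and interval size are likewise illustrative.

```python
def build_black_box_table(stimulate, record, v_min=-80.0, v_max=40.0, step=1.0):
    """Sweep the neuron's input range at a chosen precision (interval size),
    recording the output for each input. Returns the look-up table (associative
    array) that thereafter stands in for the neuron's input/output behavior."""
    table = {}
    v = v_min
    while v <= v_max:
        stimulate(v)                    # steps (1)-(2): stimulate at each interval
        table[round(v, 3)] = record()   # step (3): record the corresponding output
        v += step
    return table

def black_box_neuron(table, v_in, step=1.0):
    """Replay: respond to an arbitrary input by looking up the nearest recorded
    entry (returns None if the input falls outside the recorded range)."""
    nearest = round(round(v_in / step) * step, 3)
    return table.get(nearest)
```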

I soon chanced upon literature dealing with MEMS (micro-electro-mechanical systems) and NEMS (nano-electro-mechanical systems), which eventually led me to nanotechnology and its use in nanosurgery in particular. I saw nanotechnology as the preferred technological infrastructure regardless of the approach used. Its physical nature (i.e., its operational and functional modalities) could facilitate the electrical and chemical processes of the neuron if the physicalist-functionalist (i.e., physically embodied or “prosthetic”) approach proved either preferable or required, while the computation required for its normative functioning (regardless of its particular application) assured that it could facilitate the informationalist-functionalist approach (i.e., the computational emulation or simulation of neurons) if that approach proved preferable. This was true of MEMS as well, with the sole exception of not being able to directly synthesize neurotransmitters via mechanosynthesis, instead being limited in this regard to the release of pre-synthesized biochemical inventories. Thus I felt able to work on the conceptual development of the methodological and technological infrastructure underlying both approaches (or at least on variations to the existing operational modalities of MEMS and NEMS so as to make them suitable for their intended use), without having to definitively choose one technological/methodological infrastructure over the other.

Moreover, there could be processes that are reducible to computation yet still fail to be included in a computational emulation simply because we have failed to discover the principles underlying them. The prosthetic approach had the potential of replicating this aspect by integrating such a process, as it exists in the biological environment, into its own physical operation, and performing iterative maintenance or replacement of the biological process, until such a time as we are able to discover the underlying principles of those processes (which is a prerequisite for discovering how they contribute to the emergent computation occurring in the neuron) and thus include them in the informationalist-functionalist approach.

Also, I had by this time come across the existing approaches to Mind-Uploading and Whole-Brain Emulation (WBE), including Randal Koene’s minduploading.org, and realized that the notion of immortality through gradually replacing biological neurons with functional equivalents wasn’t strictly my own. I hadn’t yet come across Kurzweil’s thinking in regard to gradual uploading described in The Singularity is Near (where he suggests a similarly nanotechnological approach), and so felt that there was a gap in the extant literature in regard to how the emulated neurons or neural networks were to communicate with existing biological neurons (which is an essential requirement of gradual uploading and thus of any approach meant to facilitate subjective-continuity through substrate replacement). Thus my perceived role changed from being the father of this concept to filling in the gaps and inconsistencies in the already-extant approach and further developing it past its present state. This is another aspect informing my choice to work on and further varietize both the computational and the physical-prosthetic approach—because this, along with the artificial-biological neural-communication problem, was what I perceived as remaining to be done after discovering WBE.

The anticipated use of MEMS and NEMS in emulating the physical processes of the neuron at first included simply electrical potentials, but eventually developed to include the chemical aspects of the neuron as well, in tandem with my increasing understanding of neuroscience. I had by this time come across Drexler’s Engines of Creation, which was my first introduction to antecedent proposals for immortality—specifically his notion of iterative cellular upkeep and repair performed by nanobots. I applied his concept of mechanosynthesis to the NRUs to facilitate the artificial synthesis of neurotransmitters. I eventually realized that the use of pre-synthesized chemical stores of neurotransmitters was a simpler approach that could be implemented via MEMS, thus being more inclusive for not necessitating nanotechnology as a required technological infrastructure. I also soon realized that we could eliminate the need for neurotransmitters completely. We could record how specific neurotransmitters affect the nature of membrane depolarization at the post-synaptic membrane and encode this into the post-synaptic NRU (i.e., the length and degree of depolarization or hyperpolarization, and possibly the diameter of ion channels or the differential opening of ion channels—that is, some and not others). We could then assign a discrete voltage to each possible neurotransmitter (or emergent pattern of neurotransmitters; salient variables include type, quantity, and relative location), such that transmitting that voltage makes the post-synaptic NRU’s controlling circuit implement the membrane-polarization changes (via changing the number of open artificial ion-channels, or how long they remain open or closed, or their diameter/porosity) corresponding to the changes in biological post-synaptic membrane depolarization normally caused by that neurotransmitter.
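A hedged sketch of this encoding scheme, with the transmitter codes and polarization parameters invented purely for illustration: the pre-synaptic NRU transmits a discrete code in place of releasing a chemical, and the post-synaptic NRU's controlling circuit looks up and implements the membrane-polarization changes that transmitter would normally have caused.

```python
from dataclasses import dataclass

@dataclass
class PolarizationEffect:
    """Parameters the post-synaptic NRU must reproduce for a given transmitter:
    degree of polarization change (mV), its duration (ms), and how many
    artificial ion-channels to open and how widely."""
    delta_mv: float
    duration_ms: float
    channels_open: int
    channel_porosity: float

# Each neurotransmitter (or emergent transmitter pattern) is assigned a discrete
# code; the effect parameters below are illustrative stand-ins, not measurements.
TRANSMITTER_CODES = {"glutamate": 1, "GABA": 2}
EFFECT_TABLE = {
    1: PolarizationEffect(delta_mv=15.0, duration_ms=5.0, channels_open=40, channel_porosity=0.8),
    2: PolarizationEffect(delta_mv=-10.0, duration_ms=8.0, channels_open=25, channel_porosity=0.6),
}

def presynaptic_emit(transmitter: str) -> int:
    """Pre-synaptic NRU: transmit the discrete code (e.g., a voltage level)
    assigned to the transmitter it would have released."""
    return TRANSMITTER_CODES[transmitter]

def postsynaptic_apply(code: int) -> PolarizationEffect:
    """Post-synaptic NRU controlling circuit: look up the polarization changes
    that the coded transmitter normally causes and implement them."""
    return EFFECT_TABLE[code]

effect = postsynaptic_apply(presynaptic_emit("GABA"))  # inhibitory-style effect
```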

In terms of the enhancement/self-modification side of things, I also realized during this period that mental augmentation (particularly the intensive integration of artificial-neural-networks with the existing brain) increases the efficacy of gradual uploading by decreasing the total portion of your brain occupied by the biological region being replaced—thus effectively making that portion’s temporary operational disconnection from the rest of the brain more negligible to concerns of subjective-continuity.

While I was thinking of the societal implications of self-modification and self-modulation in general, I wasn’t really consciously trying to do active conceptual work (e.g., working on designs for pragmatic technologies and methodologies as I was with limitless-longevity) on this side of the project due to seeing the end of death as being a much more pressing moral imperative than increasing our degree of self-determination. The 100,000 unprecedented calamities that befall humanity every day cannot wait; for these dying fires it is now or neverness.

Virtual Verification Experiments

The various alternative approaches to gradual substrate-replacement were meant to be alternative designs contingent upon different premises about what is needed to replicate functionality while retaining subjective-continuity through gradual replacement. I originally saw the various embodiments as being narrowed down through empirical validation prior to any whole-brain replication experiments. However, I now see that multiple alternative approaches, such as computational emulation (informationalist-functionalist) and physical replication (physicalist-functionalist), the two main approaches discussed thus far, would have concurrent appeal to different segments of the population. The physicalist-functionalist approach might appeal to large numbers of people who, for one metaphysical prescription or another, don’t believe enough in the computational reducibility of mind to bet their lives on it.

These experiments originally consisted of applying sensors to a given biological neuron, constructing NRUs based on a series of variations on the two main approaches, running each variant, and looking for any functional divergence over time. This is essentially the same approach outlined in the WBE Roadmap (which I had yet to discover at that point), which suggests a validation strategy involving experiments on single neurons before moving on to the organismal emulation of increasingly complex species, up to and including the human. My thinking about these experiments evolved over the next few years to also include some novel approaches that I don’t think have yet been discussed in communities interested in brain emulation.
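A minimal sketch of what such a single-neuron validation loop might look like, assuming hypothetical stand-ins for the sensor readout and the candidate NRU (nothing here reflects an actual experimental protocol): the same controlled stimuli are fed to both the isolated biological neuron and the replication unit, and a design is rejected if their output traces ever diverge beyond a chosen threshold.

    # Illustrative sketch (hypothetical stand-ins and thresholds): compare the
    # recorded output of an isolated biological neuron with a candidate NRU
    # design driven by the same controlled stimuli, and flag functional
    # divergence over time.
    import math

    def divergence(bio_trace, nru_trace):
        """Root-mean-square difference between two equal-length voltage traces (mV)."""
        return math.sqrt(sum((b - n) ** 2 for b, n in zip(bio_trace, nru_trace)) / len(bio_trace))

    def validate_design(record_bio, run_nru, stimuli, threshold_mv=2.0):
        """Reject the NRU design if any stimulus produces divergence above threshold."""
        return all(divergence(record_bio(s), run_nru(s)) <= threshold_mv for s in stimuli)

    if __name__ == "__main__":
        stimuli = [[0.0, 1.0, 0.5], [0.2, 0.8, 0.1]]
        record_bio = lambda s: [x * 10 for x in s]        # stand-in for the sensor readout
        run_nru = lambda s: [x * 10 + 0.1 for x in s]     # stand-in for the candidate NRU
        print(validate_design(record_bio, run_nru, stimuli))  # True: divergence stays under 2 mV

Here record_bio and run_nru are placeholders; the point is only the structure of the comparison, not the particular divergence metric or threshold.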

An equivalent physical or computational simulation of the biological neuron’s environment is required to verify functional equivalence; otherwise we wouldn’t be able to distinguish between functional divergence due to an insufficient replication-approach/NRU-design and functional divergence due to a difference in input or operation between the model and the original (caused by insufficiently synchronizing the environmental parameters of the NRU and its corresponding original). Isolating these neurons from their organismal environment allows the necessary fidelity (and thus the computational intensity) of the simulation to be minimized, by reducing the number of environmental variables affecting the biological neuron during the span of the initial experiments. Even if this doesn’t give us a perfectly reliable model of the efficacy of functional replication given the number of environmental variables a neuron belonging to a full brain is exposed to, it is a fair approximation. Some NRU designs might fail even in a relatively simple neuronal environment, so testing every NRU design against a number of environmental variables comparable to the biological brain might be unnecessary and needlessly expensive given the cost-benefit ratio. And since we need to isolate the neuron to perform any early non-whole-organism experiments (i.e., on individual neurons) at all, having precise control over the number and nature of environmental variables would be relatively easy; such control is already an important part of the methodology used for normative biological experimentation, because a lack of control over environmental variables makes for an inconsistent methodology and thus for unreliable data.

And as we scale up to the whole-network and eventually the organismal level, a similar reduction of the computational requirements of the NRU’s environmental simulation is possible by replacing the inputs or sensory mechanisms (from a single photocell to whole sensory organs) with VR-modulated input. The required complexity, and thus the computational intensity, of a sensorially mediated environment can be vastly reduced if the normative sensory environment of the organism is supplanted with a much-simplified VR simulation.

Note that the efficacy of this approach compared with the first (reducing the actual environmental variables) is hypothetically greater, because going from a simplified VR version to the original sensorial environment is a difference not of category but of degree. A potentially fruitful variation on the first experiment (physical reduction of a biological neuron’s environmental variables) would therefore be not the complete elimination of environmental variables, but rather a decrease in the range or degree of deviation of each variable: keeping all the categories while reducing their degree.
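A toy sketch of this variation, with entirely hypothetical variables and values: each environmental variable’s deviation from baseline is scaled by a single factor, so that the simplified and the full environment differ only in the degree of variation, not in which categories of variable are present. The same parameter could equally stand in for the fidelity of a VR-supplanted sensory environment.

    # Illustrative sketch (all variables and values hypothetical): instead of
    # eliminating an environmental variable outright, scale its deviation from
    # baseline by a factor alpha in [0, 1]. alpha = 0 clamps every variable to
    # baseline; alpha = 1 restores its full natural range, so moving between the
    # simplified and the original environment is a difference of degree only.
    import random

    BASELINES = {"temperature_c": 37.0, "glucose_mM": 5.0, "extracellular_K_mM": 4.0}
    NATURAL_SPREAD = {"temperature_c": 0.5, "glucose_mM": 1.0, "extracellular_K_mM": 0.8}

    def sample_environment(alpha: float) -> dict:
        """Sample every environmental variable with its deviation scaled by alpha."""
        return {
            name: BASELINES[name] + alpha * random.gauss(0.0, NATURAL_SPREAD[name])
            for name in BASELINES
        }

    if __name__ == "__main__":
        print(sample_environment(alpha=0.0))   # fully clamped environment
        print(sample_environment(alpha=0.25))  # reduced-degree variation
        print(sample_environment(alpha=1.0))   # full natural variation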

Anecdotally, one novel modification conceived during this period involves distributing sensors (operatively connected to the sensory areas of the CNS) in the brain itself, so that we can viscerally sense ourselves thinking—the notion of metasensation: a sensorial infinite regress caused by having sensors in the sensory modules of the CNS, essentially allowing one to sense oneself sensing oneself sensing.

Another is a seeming refigurement of David Pearce’s Hedonistic Imperative: namely, the use of active NRU modulation to negate the effects of cell desensitization (or, more generally, stimulus-response desensitization), the tendency for an experience, or even a thought, to decrease in intensity the more times it is repeated. I felt that this was what made some of us lose interest in our lovers and become bored by things we once enjoyed. If we were able to stop cell desensitization, we wouldn’t have to needlessly lose experiential amplitude for the things we love.

In the next chapter I will describe the work I did in the first months of 2008, during which I worked almost wholly on conceptual varieties of the physically embodied prosthetic (i.e., physical-functionalist) approach, particularly on gradually replacing subsections of individual neurons so as to increase the graduality of the cumulative procedure, for several reasons:

The original utility of ‘hedging our bets’, as discussed earlier: developing multiple approaches increases evolutionary diversity, so that if one approach fails, we have other approaches to try.

I felt the computational side was already largely developed in the work done by others in Whole-Brain Emulation, and thus that I would be benefiting the larger objective of indefinite longevity more by focusing on those areas that were then comparatively less developed.

The perceived benefit of a new approach to subjective-continuity through a substrate-replacement procedure, one aiming to increase the likelihood of gradual uploading’s success by increasing the procedure’s cumulative degree of graduality. The approach was called Iterative Gradual Replacement and consisted of undergoing several gradual-replacement procedures, wherein the class of NRU used becomes progressively less similar to the operational modality of the original, biological neurons with each iteration; the greater the number of iterations used, the less discontinuous each replacement-phase is in relation to its preceding and succeeding phases. The most basic embodiment of this approach would involve gradual replacement with physical-functionalist (prosthetic) NRUs that are in turn gradually replaced with informationalist-functionalist (computational/emulatory) NRUs. My qualms with this approach today stem from the observation that the operational modalities of the physically embodied NRUs seem as discontinuous in relation to those of the computational NRUs as the operational modalities of the biological neurons do. The problem seems to result from the lack of an intermediary stage between physical embodiment and computational (or second-order) embodiment.
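Purely as an illustration of the procedure’s logic (the stage names, step size, and data structures below are hypothetical, not a proposed implementation), the following sketch walks every unit through successively less biologically similar NRU classes, a small fraction of the brain at a time; the more intermediate classes are inserted into the stage list, the smaller each jump in operational modality becomes.

    # Hypothetical sketch of Iterative Gradual Replacement: each pass replaces
    # units with an NRU class slightly less similar to biological operation than
    # the class installed in the previous pass. Stage names and the step size
    # are illustrative only; verification between steps is omitted.
    REPLACEMENT_STAGES = [
        "biological neuron",
        "physical-functionalist NRU (prosthetic, chemical and electrical)",
        "physical-functionalist NRU (prosthetic, electrical only)",
        "informationalist-functionalist NRU (computational emulation)",
    ]

    def iterative_gradual_replacement(brain_units, step_fraction=0.01):
        """Walk every unit through successive NRU classes, replacing only a small
        fraction of the brain per step so each phase stays maximally gradual."""
        for next_stage in REPLACEMENT_STAGES[1:]:
            pending = list(brain_units)                   # units still at the previous stage
            batch = max(1, int(len(pending) * step_fraction))
            while pending:
                for unit in pending[:batch]:
                    brain_units[unit] = next_stage        # swap this unit to the next class
                pending = pending[batch:]
        return brain_units

    if __name__ == "__main__":
        brain = {"n{}".format(i): REPLACEMENT_STAGES[0] for i in range(10)}
        print(iterative_gradual_replacement(brain))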

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.
