Ontological Realism and Creating the One Real Future – Video by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
August 23, 2014
******************************

An ongoing debate in ontology concerns the question of whether ideas or the physical reality have primacy. Mr. Stolyarov addresses the implications of the primacy of the physical reality for human agency in the pursuit of life and individual flourishing. Transhumanism and life extension are in particular greatly aided by an ontological realist (and physicalist) framework of thought.

References

– “Ontological Realism and Creating the One Real Future” – Essay by G. Stolyarov II
– “Objective Reality” – Video by David Kelley
– A Rational Cosmology – Treatise by G. Stolyarov II
– “Putting Randomness in Its Place” – Essay by G. Stolyarov II
– “Putting Randomness in Its Place” – Video by G. Stolyarov II

Ontological Realism and Creating the One Real Future – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
August 13, 2014
******************************

An ongoing debate in ontology concerns the question of whether ideas or the physical reality have primacy. In my view, the physical reality is clearly ontologically primary, because it makes possible the thinking and idea-generation which exist only as very sophisticated emergent processes depending on multiple levels of physical structures (atoms, cells, tissues, organs, organisms of sufficient complexity – and then a sufficiently rich history of sensory experience to make the formation of interesting ideas supportable).

One of my favorite contemporary philosophers is David Kelley – an Objectivist but one very open to philosophical innovation – without the dogmatic taint that characterized the later years of Ayn Rand and some of her followers today. He has recently released a video entitled “Objective Reality”, where he discusses the idea of the primacy of existence over consciousness. Here, I seek to address the primacy of the physical reality in its connection with several additional considerations – the concepts of essences and qualia, as well as the implications of the primacy of the physical reality for human agency in the pursuit of life and individual flourishing.

Essences

Some ontological idealists – proponents of the primacy of ideas – will claim that the essence of an entity exists outside of that entity, in a separate realm of “immaterial” ideas akin to Plato’s forms. On essences, by contrast, I am of the Aristotelian persuasion that the essence of a thing is part of that very thing: it is the sum of the qualities of an entity without which that entity could not have been what it is. Essences do not exist apart from things; rather, any thing of a particular sort that exists has the essence which defines it as that thing, along with perhaps some other incidental qualities that are not constitutive of its being that thing.

For instance, a chair may be painted blue or green or any other color, and it may have three legs instead of four, and it may have some dents in it – but it would still be a chair. But if all chairs were destroyed, and no one remembered what a chair was, there would be no ideal Platonic form of the chair floating out there somewhere. In that sense, I differ from the idealists’ characterization of essences as “immaterial”. Rather, an essence always characterizes a material entity or process performed by material entities.

Qualia

Qualia are an individual’s subjective, conscious experiences of reality – for instance, how an individual perceives the color red or the sound of a note played on an instrument. But qualia, too, have a material grounding. As a physicalist, I understand qualia to be the result of physical processes within the body and brain that generate certain sensory perceptions of the world. It follows that different qualia can only be generated if one’s organism has different physical components.

A bat, a fly, or a whale would certainly experience the same external reality differently from a human. Most humans (the ones whose sense organs are not damaged or characterized by genetic defects) have the same essential perceptual structures and so, if placed within the exact same vantage point relative to an object, would perceive it in the same way (with regard to what appears before their senses). After that, of course, what they choose to focus on with their minds and how they choose to interpret what they see (in terms of opinions, associations, decisions regarding what to do next) could differ greatly. The physical perception is objective, but the interpretation of that perception is subjective. But by emulating the sensory organs of another organism (even a bat or a fly), it should be possible to perceive what that organism perceives. I delve into this principle in some detail in Chapter XII of A Rational Cosmology: “The Objectivity of Consciousness”.

Importance of Ontological Realism to Life, Flourishing, and Human Agency

Some opponents of ontological realism might classify it as a “naïve” perspective and claim that those who see physical reality as primary are inappropriately assigning it “ontological privilege”. On the contrary, I strongly hold that this world is the one and that, certainly, events that happen in this world are ontologically privileged for having happened – as opposed to the uncountably many possibilities for what might have happened but did not. Moreover, I see this recognition as an essential starting point for the endeavor which is really at the heart of individual liberty, life extension, transhumanism, and, more generally, a consistent vision of humanism and morality: the preservation of the individual – of all individuals who have not committed irreparable wrongs – from physical demise.

I am not an adherent of the “many worlds” interpretation of quantum mechanics, which some may posit in opposition to my view of the primacy of the single physical reality which we directly experience and inhabit. Indeed, to me, it does not appear that quantum mechanics has a valid philosophical interpretation at all (at least not until some extremely rational and patient philosopher delves into it and tries to puzzle it out); rather, it is a set of equations that is reasonably predictive of the behavior of subatomic particles (sometimes) through a series of probabilistic models. Perhaps in part due to my work in another highly probability-driven area – actuarial science – my experience informs me that probabilistic models are at best only useful approximations of phenomena that may not yet be accessible to us in other ways, and a substantial fraction of the time the models are wildly wrong anyway. As for the very concept of randomness itself, it is a useful epistemological idea, but not a valid metaphysical one, as I explain in my essay “Putting Randomness in Its Place”.

In my view, the past is irreversible, and it happened in the one particular way it happened. The future is full of potential, because it has not happened yet, and the emergent property of human volition enables it to happen in a multitude of ways, depending on the paths we choose. In a poetic sense, it could be said that many worlds unfold before us, but with every passing moment, we pick one of them and that world becomes the one irreversibly, while the others are not retained anywhere. Not only is this understanding a necessary prerequisite for the concept of moral responsibility (our actions have consequences in bringing about certain outcomes, for which we can be credited or faulted, rewarded or punished), but it is also necessary as a foundation for the life-extension premise itself.

If there were infinitely many possible universes, where each of us could have died or not died at every possible instant, then in some of those hypothetical universes, we would have all already been beneficiaries of indefinite life extension. Imagine a universe where humanity was lucky and avoided all of the wars, tyrannies, epidemics, and superstitions that plagued our history and, as a result, was able to progress so rapidly that indefinite longevity would have been already known to the ancient Greeks! This would make for fascinating fiction, and I readily admit to enjoying the occasional retrospective “What if?” contemplation – e.g., what if the Jacobins had not taken over during the French Revolution, or what if Otto von Bismarck had never come to power in Germany, or what if the attacks of September 11, 2001 (a major setback for human progress, largely due to the reactionary violation of civil liberties by Western governments) had never happened? Unfortunately, from an ontological perspective, I do not have that luxury of rewriting the past.  As for the future, it can only be written through actions that affect the physical world, but any tools we can create to help us do this would be welcome.

This is certainly not the best of all possible worlds (a point amply demonstrated in one of my favorite works, Voltaire’s Candide), but it is the world we find ourselves in, through a variety of historical accidents, path-dependencies, and our own prior choices and their foreseen and unforeseen repercussions. But this is indeed our starting point when it comes to any future action, and the choice each of us ultimately faces is whether (i) to become a passive victim of the “larger forces” in this world (to conform or “adapt”, as many people like to call it), (ii) to create an alternate world using imagination and subjective experience only, or (iii) to physically alter this world to fit the parameters of a more just, happy, safe, and prosperous existence – a task to which only we are suited (since there is no cosmic justice or higher power). It should be clear by now that I strongly favor the third option. We should, through our physical deeds, harness the laws of nature to create the world we would wish to inhabit.

Individualism, Objective Reality, and Open-Ended Knowledge – Video by G. Stolyarov II

Mr. Stolyarov explains how an objective reality governed by physical laws is compatible with individual self-determination and indeed is required for individuals to meaningfully expand their lives and develop their unique identities.

Reference

– “Individualism, Objective Reality, and Open-Ended Knowledge” – Post by G. Stolyarov II

Feedback Loops and Individual Self-Determination – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
September 15, 2013
******************************

I have always been fond of the concept of feedback loops, and it is indeed the case that much of humankind’s progress, and the progress of a given individual, can be thought of as a positive feedback loop. In the technology/reason interaction, human reason leads to the creation of technology, which empowers human reason and raises rational thinking to new heights, which enables still further technology, and so on. This, I think, is a good way of understanding why technological progress is not just linear, but exponential; the rate of progress builds on itself using a positive feedback loop.
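
The compounding dynamic described above is easy to make concrete. Below is a minimal Python sketch (not from the original article; the rate constants are arbitrary and purely illustrative) contrasting fixed-increment growth with growth that feeds back on itself:

```python
# Illustrative only: comparing linear growth with growth driven by a positive
# feedback loop, where each period's progress is proportional to the
# capability already accumulated.

def linear_growth(periods, step=1.0):
    """Capability grows by a fixed increment each period."""
    capability = 1.0
    for _ in range(periods):
        capability += step
    return capability

def feedback_growth(periods, rate=0.1):
    """Capability grows in proportion to itself: a positive feedback loop."""
    capability = 1.0
    for _ in range(periods):
        capability += rate * capability  # reason -> technology -> more reason
    return capability

for t in (10, 50, 100):
    print(t, linear_growth(t), round(feedback_growth(t), 1))
# The feedback variant compounds like (1 + rate)**t, i.e., exponentially,
# which is the point about technological progress building on itself.
```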

Negative feedback loops also exist, of course. For instance, one eats and feels sated, so one stops eating. One exercises and becomes tired, so one stops exercising. Thomas Malthus’s mistake was to view human economic and technological activity as a negative feedback loop (with the improved life opportunities that technology makes possible defeated in the end by overpopulation and resource scarcity). He did not realize that the population growth made possible by technology is a growth in human reasoning ability (more bright minds out there, including the extreme geniuses who can produce radical, paradigm-shifting breakthroughs), which in turn can result in further technological growth, far outpacing the growth in resource demands caused by increasing population.

I also think that positive feedback loops play a role in the questions surrounding free will and determinism. For instance, the growth trajectory of an individual – the process of intellectual empowerment and skill acquisition – is a positive feedback loop. By learning a skill and doing it well, a person feels better about his situation and becomes more motivated to make further progress in the skill. How does it start? This, I think, is where the substance of the free-will/determinism debate has historically led people to be at odds. In my view, free will plays a crucial role, especially at the beginning of a chain of undertakings, in the individual’s choice to focus on a particular subset of reality – certain entities about which one would like to know more, or certain projects one would want to pursue further.

Generally, the choice to focus or not is always under an individual’s control under normal conditions of the brain and body (e.g., adequate rest, lack of physical pain, freedom from pressing demands on one’s time). A young child who chooses to focus on productive, mind-enhancing endeavors essentially sets himself up for a virtuous positive feedback loop that continues throughout life. The first instance of such focus could make a very subtle difference compared to a child who chooses not to focus, and the other child could possibly catch up by choosing to focus later, but an accumulation of subtle differences in individual decisions could result in very different trajectories due to path-dependencies in history and in individual lives. The good news for all of us is that the decision to focus is always there; as one gets older and the set of possible opportunities expands, the harder decision becomes what to focus on out of a myriad of possibly worthwhile endeavors.

This understanding integrates well with the portrayal of free will as compatible with an underlying entirely physical nature of the mind. There is undeniably an aspect of the chemistry of the brain that results in human focus and enables the choice to focus. Yet this kind of physical determination is the same as self-determination or free will, if you will. My physical mind is the same as me, so if it is chemically configured to focus (by me), then this is equivalent to me making the choice to focus – which is how the virtuous cycle of skill acquisition leading to motivation leading to further skill acquisition begins.

In general, in these kinds of recursive phenomena, it may be possible to legitimately answer the question of what came first if one considers not only the types of phenomena (A leading to B leading to A, etc.), but also qualitative and quantitative distinctions among each instance of the same type of phenomenon (e.g., a small amount of A leading to a little bit of B, leading to somewhat more of A with a slightly different flavor, leading to radically more of B, which opens up entirely new prospects for future feedback loops). We see this sort of development when it comes to the evolution of life forms, of technologies, and of entire human societies. If traced backward chronologically, each of these chains of development will be seen to contain many variations of similar types of phenomena, but also clear beginnings for each sequence of feedback loops (e.g., the philosophy of Aristotle paving the way for Aquinas paving the way for the Renaissance paving the way for the Enlightenment paving the way for transhumanism). History does repeat itself, though always with new and surprising variations upon past themes. In the midst of all this recursion, feedback, and path-dependency, we can chart unique, never-quite-previously-tried paths for ourselves.

Individualism, Objective Reality, and Open-Ended Knowledge – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
September 11, 2013
******************************

I am an individualist, but not a relativist. While I have no dispute with individuals determining their own meaning and discovering their own significance (indeed, I embrace this), this self-determination needs to occur within an objective physical universe. This is not an optional condition for any of us. The very existence of the individual relies on absolute, immutable physical and biological laws that can be utilized to give shape to the individual’s desires, but that cannot be ignored or wished away. This is why we cannot simply choose to live indefinitely and have this outcome occur. We need to develop technologies that would use the laws of nature to bring indefinite longevity about.

In other words, I am an ontological absolutist who sees wisdom in Francis Bacon’s famous statement that “Nature, to be commanded, must be obeyed.” Individual choice, discovery, and often the construction of personal identity and meaning are projects that I embrace, but they rely on fundamental objective prerequisites of matter, space, time, and causality. Individuals who wish to shape their lives for the better would be wise to take these prerequisites into account (e.g., by developing technologies that overcome the limitations of unaided biology or un-transformed matter). My view is that a transhumanist ethics necessitates an objective metaphysics and a reason-and-evidence-driven epistemology.

This does not, however, preclude an open-endedness to human knowledge and scope of generalization about existence. Even though an absolute reality exists and truth can be objectively known, we humans are still so limited and ignorant that we scarcely know a small fraction of what there is to know. Moreover, each of us has a grasp of different aspects of truth, and therefore there is room for valid differences of perspective, as long as they do not explicitly contradict one another. In other words, it is not possible for both A and non-A to be true, but if there is a disagreement between a person who asserts A and a person who asserts B, it is possible for both A and B to be true, as long as A and B are logically reconcilable. A dogmatic paradigm would tend to erroneously classify too much of the realm of ideas as non-A, if A is true, and hence would falsely reject some valid insights.

These insights illustrate the compatibility of objective physical and biological laws (physicalism) with individual self-determination (volition or free will). There is a similar relationship between ontology and ethics. An objective ontology (based on immutable natural laws) is needed as a foundation for an individualistic ethics of open-ended self-improvement and ceaseless progress.

We Seek Not to Become Machines, But to Keep Up with Them – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
July 14, 2013
******************************
This article attempts to clarify four areas within the movement of Substrate-Independent Minds and the discipline of Whole-Brain Emulation that are particularly ripe for ready-at-hand misnomers and misconceptions.

Substrate-Independence 101:

  • Substrate-Independence:
    It is Substrate-Independence for Mind in general, but not any specific mind in particular.
  • The Term “Uploading” Misconstrues More than it Clarifies:
    Once WBE is experimentally verified, we won’t be using conventional or general-purpose computers like our desktop PCs to emulate real, specific persons.
  • The Computability of the Mind:
    This concept has nothing to do with the brain operating like a computer. The liver is just as computable as the brain; their difference is one of computational intensity, not category.
  • We Don’t Want to Become The Machines – We Want to Keep Up With Them!:
    SIM & WBE are sciences of life-extension first and foremost. It is not out of sheer technophilia, contempt of the flesh, or wanton want of machinedom that proponents of Uploading support it. It is, for many, because we fear that Recursively Self-Modifying AI will implement an intelligence explosion before Humanity has a chance to come along for the ride. The creation of any one entity superintelligent relative to the rest constitutes both an existential risk and an antithetical affront to Man, whose sole central and incessant essence is to make himself to an increasingly greater degree, and not to have some artificial god do it for him or tell him how to do it.
Substrate-Independence

The term “substrate-independence” denotes the philosophical thesis of functionalism – that what is important about the mind and its constitutive sub-systems and processes is their relative function. If such a function could be recreated using an alternate series of component parts or procedural steps, or could be recreated on another substrate entirely, the thesis of functionalism holds that it should be the same as the original, experientially speaking.

However, one rather common and ready-at-hand misinterpretation stemming from the term “Substrate-Independence” is the notion that we as personal selves could arbitrarily jump from mental substrate to mental substrate, since mind is software and software can be run on various general-purpose machines. The most common form of this notion is exemplified by scenarios laid out in various Greg Egan novels and stories, wherein a given person sends their mind encoded as a wireless signal to some distant receiver, to be reinstantiated upon arrival.

The term “substrate-independent minds” should denote substrate-independence for minds in general – again, the philosophical thesis of functionalism – and not this second, illegitimate notion. In order to send oneself as such a signal, one would have to put all the processes constituting the mind “on pause” – that is, all causal interaction and thus causal continuity between the software components and processes instantiating our selves would be halted while the software was encoded as a signal, transmitted, and subsequently decoded. We could expect this to be equivalent to temporary brain death or to destructive uploading without any sort of gradual replacement, integration, or transfer procedure. Each of these scenarios incurs the ceasing of all causal interaction and causal continuity among the constitutive components and processes instantiating the mind. Yes, we would be instantiated upon reaching our destination, but we can expect this to be as phenomenally discontinuous as brain death or destructive uploading.

There is much talk in philosophical and futurist circles – where Substrate-Independent Minds are a familiar topic and a common point of discussion – about how the mind is software. This sentiment ultimately derives from functionalism and the notion that when it comes to mind, it is not the material of the brain that matters, but the process(es) emerging therefrom. And because almost all software is designed so as to be implemented on general-purpose (i.e., standardized) hardware, the assumption arises that we should likewise be able to transfer the software of the mind onto a new physical computational substrate with as much ease as we transfer ordinary software. While we would emerge from such a transfer functionally isomorphic with ourselves prior to the jump from computer to computer, we can expect this to be the phenomenal equivalent of brain death or destructive uploading – again, because all causal interaction and continuity between that software’s constitutive sub-processes would have been discontinued. We would have been put on pause in the time between leaving one computer, whether as a static signal or as static solid-state storage, and arriving at the other.

This is not to say that we couldn’t transfer the physical substrate implementing the “software” of our mind to another body, provided the other body were equipped to receive such a physical substrate. But this doesn’t have quite the same advantage as beaming oneself to the other side of Earth, or Andromeda for that matter, at the speed of light.

But to transfer a given WBE to another mental substrate without incurring phenomenal discontinuity may very well involve a second gradual integration procedure, in addition to the one the WBE initially underwent (assuming it isn’t a product of destructive uploading). And indeed, this would be more properly thought of in the context of a new substrate being gradually integrated with the WBE’s existing substrate, rather than the other way around (i.e., portions of the WBE’s substrate being gradually integrated with an external substrate). It is likely to be much easier to simply transfer a given physical/mental substrate to another body, or to bypass this need altogether by actuating bodies via tele-operation instead.

In summary, what is sought is substrate-independence for mind in general, and not for a specific mind in particular (at least not without a gradual integration procedure, like the type underlying the notion of gradual uploading, so as to transfer such a mind to a new substrate without causing phenomenal discontinuity).

The Term “Uploading” Misconstrues More Than It Clarifies

The term “Mind Uploading” has some drawbacks and creates common initial misconceptions. It is based on terminology originating in the context of conventional, contemporary computers – which may create the initial impression that we are talking about uploading a given mind into a desktop PC, to be run in the manner that Microsoft Word is run. This makes the notion of WBE seem more fantastic and incredible – and thus improbable – than it actually is. I don’t think anyone seriously speculating about WBE would entertain such a notion.

Another potential misinterpretation particularly likely to result from the term “Mind Uploading” is that we seek to upload a mind into a computer – as though it were nothing more than a simple file transfer. This, again, connotes modern paradigms of computation and communications technology that are unlikely to be used for WBE. It also creates the connotation of putting the mind into a computer – whereas a more accurate connotation, at least as far as gradual uploading as opposed to destructive uploading is concerned, would be bringing the computer gradually into the biological mind.

It is easy to see why the term initially came into use. The notion of destructive uploading was the first embodiment of the concept. The notion of gradual uploading so as to mitigate the philosophical problems pertaining to how much a copy can be considered the same person as the original, especially in contexts where they are both simultaneously existent, came afterward. In the context of destructive uploading, it makes more connotative sense to think of concepts like uploading and file transfer.

But in the notion of gradual uploading, portions of the biological brain – most commonly single neurons, as in Robert A. Freitas’s and Ray Kurzweil’s versions of gradual uploading – are replaced with in-vivo computational substrate, placed where the neuron being replaced was located. Such a computational substrate would be operatively connected to electrical or electrochemical sensors (to translate the biochemical or, more generally, biophysical output of adjacent neurons into computational input that can be used by the computational emulation) and electrical or electrochemical actuators (to likewise translate computational output of the emulation into biophysical input that can be used by adjacent biological neurons). It is possible to have this computational emulation reside in a physical substrate existing outside of the biological brain, connected to in-vivo biophysical sensors and actuators via wireless communication (i.e., communicating via electromagnetic signal), but this simply introduces a potential lag-time that may then have to be overcome by faster sensors, faster actuators, or a faster emulation. It is likely that the lag-time would be negligible, especially if the emulation were located in a convenient module external to the body but kept “on it” at all times, to minimize the transmission delays that increase as one gets farther away from such an external computational device. This would also likely necessitate additional computation to model the necessary changes to transmission speed in response to how far away the person is. Otherwise, signals that are meant to arrive at a given time could arrive too soon or too late, thereby disrupting functionality. However, placing the computational substrate in vivo obviates these potential logistical obstacles.
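
As a purely illustrative sketch of the architecture just described – not an actual proposed implementation – the following Python fragment models a replacement unit in which an emulated neuron (here a toy leaky integrate-and-fire stand-in) is coupled to adjacent biological neurons through sensor and actuator transducers. All names and dynamics are hypothetical:

```python
# Hypothetical sketch: an emulated neuron behind transducers that translate
# biophysical output of adjacent neurons into computational input (sensor)
# and computational output back into biophysical input (actuator).

class EmulatedNeuron:
    """Toy leaky integrate-and-fire stand-in for a real neuronal emulation."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, input_current):
        self.potential = self.leak * self.potential + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0
            return 1.0  # spike
        return 0.0

class ReplacementUnit:
    """Couples the emulation to adjacent biological neurons via transducers."""
    def __init__(self, sensor, actuator):
        self.sensor = sensor      # biophysical signal -> numeric input
        self.actuator = actuator  # numeric output -> biophysical signal
        self.emulation = EmulatedNeuron()

    def tick(self):
        biophysical_in = self.sensor()       # read adjacent biological neurons
        spike = self.emulation.step(biophysical_in)
        self.actuator(spike)                 # drive adjacent biological neurons

# Toy demo with a constant sensor and a printing actuator:
unit = ReplacementUnit(sensor=lambda: 0.4,
                       actuator=lambda s: print("spike") if s else None)
for _ in range(10):
    unit.tick()
```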

This notion is, I think, not brought into the discussion enough. It is an intuitively obvious notion if you’ve thought a great deal about Substrate-Independent Minds and frequented discussions on Mind Uploading. But to a newcomer who has heard the term Gradual Uploading for the first time, it is all too easy to think: “Yes, but then one emulated neuron would exist on a computer, and the original biological neuron would still be in the brain. So once you’ve gradually emulated all these neurons, you have an emulation on a computer, and the original biological brain, still as separate physical entities. Then you have an original and the copy – so where does the gradual in Gradual Uploading come in? How is this any different from destructive uploading? At the end of the day you still have a copy and an original as separate entities.”

This seeming impasse is I think enough to make the notion of Gradual Uploading seem at least intuitively or initially incredible and infeasible before people take the time to read the literature and discover how gradual uploading could actually be achieved (i.e., wherein each emulated neuron is connected to biophysical sensors and actuators to facilitate operational connection and causal interaction with existing in-vivo biological neurons) without fatally tripping upon such seeming logistical impasses, as in the example above. The connotations created by the term I think to some extent make it seem so fantastic (as in the overly simplified misinterpretations considered above) that people write off the possibility before delving deep enough into the literature and discussion to actually ascertain the possibility with any rigor.

The Computability of the Mind

Another common misconception is that the feasibility of Mind Uploading is based upon the notion that the brain is a computer or operates like a computer. The worst version of this misinterpretation that I’ve come across is the claim that proponents and supporters of Mind Uploading hold that the mind is similar in operation to current and conventional paradigms of computers.

Before I elaborate on why this is wrong, I’d like to point out a particularly harmful sentiment that can result from this notion. It makes the concept of Mind Uploading seem dehumanizing, because conventional computers don’t display anything like intelligence or emotion. This makes people conflate the possible behaviors of future computers with the behaviors of current computers. Obviously computers don’t feel happiness or love, the reasoning goes, and so to say that the brain is like a computer comes across as a farcical claim.

Machines don’t have to be as simple or as un-adaptable and invariant as they are today. The universe itself is a machine. In other words, either everything is a machine or nothing is.

This misunderstanding also makes people think that advocates and supporters of Mind Uploading are claiming that the mind is reducible to basic or simple autonomous operations, like cogs in a machine, which constitutes for many people a seeming affront to our privileged place in the universe as humans, in general, and to our culturally ingrained notions of human dignity being inextricably tied to physical irreducibility, in particular. The intuitive notions of human dignity and the ontologically privileged nature of humanity have yet to catch up with physicalism and scientific materialism (a.k.a. metaphysical naturalism). It is not the proponents of Mind Uploading that are raising these claims, but science itself – and for hundreds of years, I might add. Man’s privileged and physically irreducible ontological status has become more and more undermined throughout history since at least as far back as Darwin’s theory of evolution, which brought the notion of the past and future phenotypic evolution of humanity into scientific plausibility for the first time.

It is also seemingly disenfranchising to many people, in that notions of human free will and autonomy seem to be challenged by physical reductionism and determinism – perhaps because many people’s notions of free will are still associated with a non-physical, untouchably metaphysical human soul (i.e., mind-body dualism) which lies outside the purview of physical causality. To compare the brain to a “mindless machine” is still for many people disenfranchising to the extent that it questions the legitimacy of their metaphysically tied notions of free will.

That the sheer audacity of experience and the raucous beauty of feeling are ultimately reducible to physical and procedural operations (I hesitate to use the word “mechanisms” for its likewise misconnotative conceptual associations) does not take away from them. If they were the result of some untouchable metaphysical property – a sentiment that mind-body dualism promulgated for quite some time – then there would be no way for us to understand them, to really appreciate them, and to change them (e.g., improve upon them) in any way. Physicalism and scientific materialism are needed if we are ever to see how it is done and to ever hope to change it for the better. Figuring out how things work is one of Man’s highest merits – and there is no reason Man’s urge to discover and determine the underlying causes of the world should not apply to his own self as well.

Moreover, the fact that experience, feeling, being, and mind result from the convergence of individually simple systems and processes makes the mind’s emergence from such simple convergence all the more astounding, amazing, and rare – not less! If the complexity and unpredictability of mind were the result of complex and unpredictable underlying causes (as the metaphysical notions of mind-body dualism connote), then the fact that mind turned out to be complex and unpredictable wouldn’t be much of a surprise. The simplicity of the mind’s underlying mechanisms makes the mind’s emergence all the more amazing, and should not take away from our human dignity but should instead raise it up to heights yet unheralded.

Now that we have addressed such potentially harmful second-order misinterpretations, we will address their root: the common misinterpretations likely to result from the phrase “the computability of the mind”. Not only does this phrase not say that the mind is similar in basic operation to conventional paradigms of computation – as though a neuron were comparable to a logic gate or transistor – but it does not even necessarily make the more credible claim that the mind is like a computer in general. The misinterpretation makes the notion of Mind Uploading seem dubious because it conflates two different types of physical systems – computers and the brain.

The kidney is just as computable as the brain. That is to say that the computability of mind denotes the ability to make predictively accurate computational models (i.e., simulations and emulations) of biological systems like the brain, and is not dependent on anything like a fundamental operational similarity between biological brains and digital computers. We can make computational models of a given physical system, feed it some typical inputs, and get a resulting output that approximately matches the real-world (i.e., physical) output of such a system.

The computability of the mind has very little to do with the mind acting as or operating like a computer, and much, much more to do with the fact that we can build predictively accurate computational models of physical systems in general. This also, advantageously, negates and obviates many of the seemingly dehumanizing and indignifying connotations identified above that often result from the claim that the brain is like a machine or like a computer. It is not that the brain is like a computer – it is just that computers are capable of predictively modeling the physical systems of the universe itself.
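
A toy model can make this point concrete. The following Python sketch (a standard one-compartment, first-order elimination approximation; all parameters are invented for illustration) “computes” kidney clearance without the kidney being anything like a computer:

```python
# Illustrative only: "computability" here means we can build predictively
# accurate models of a physical system, feed them typical inputs, and compare
# outputs to measurements - not that the system operates like a computer.

def simulate_clearance(initial_amount, rate_constant, dt, steps):
    """Numerically integrate dA/dt = -k * A (first-order renal elimination)."""
    amount = initial_amount
    trajectory = [amount]
    for _ in range(steps):
        amount += -rate_constant * amount * dt  # forward Euler step
        trajectory.append(amount)
    return trajectory

# Feed the model a typical input and inspect the predicted output:
traj = simulate_clearance(initial_amount=100.0, rate_constant=0.2, dt=0.1, steps=50)
print(round(traj[-1], 2))  # decays toward 100 * exp(-0.2 * 5), roughly 36.8
```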

We Want Not To Become Machines, But To Keep Up With Them!

Too often is uploading portrayed as the means to superhuman speed of thought or to transcending our humanity. It is not that we want to become less human, or to become like a machine. For most Transhumanists, and indeed most proponents of Mind Uploading and Substrate-Independent Minds, meat is machinery anyway. In other words, there is no real (i.e., legitimate) ontological distinction between human minds and machines to begin with. Too often is uploading seen as the desire for superhuman abilities. Too often is it seen as a bonus, nice but ultimately unnecessary.

I vehemently disagree. Uploading has been from the start for me (and I think for many other proponents and supporters of Mind Uploading) a means of life extension, of deferring and ultimately defeating untimely, involuntary death, as opposed to an ultimately unnecessary means to better powers, a more privileged position relative to the rest of humanity, or to eschewing our humanity in a fit of contempt of the flesh. We do not want to turn ourselves into Artificial Intelligence, which is a somewhat perverse and burlesque caricature that is associated with Mind Uploading far too often.

The notion of gradual uploading is implicitly a means of life extension. Gradual uploading will be significantly harder to accomplish than destructive uploading. It requires a host of technologies and methodologies – brain-scanning, in-vivo locomotive systems such as but not limited to nanotechnology, or else extremely robust biotechnology – and a host of precautions to prevent causing phenomenal discontinuity, such as allowing each non-biological functional replacement time to causally interact with adjacent biological components before the next biological component that it causally interacts with is likewise replaced. Gradual uploading is a much harder feat than destructive uploading, and the only advantage it has over destructive uploading is preserving the phenomenal continuity of a single specific person. In this way it is implicitly a means of life extension, rather than a means to the creation of AGI, because its only benefit is the preservation and continuation of a single, specific human life – and that benefit entails a host of added precautions and additional necessitated technological and methodological infrastructures.

If we didn’t have to fear the creation of recursively self-improving AI, biased towards being likely to recursively self-modify at a rate faster than humans are likely to (or indeed, are able to safely – that is, gradually enough to prevent phenomenal discontinuity), then I would favor biotechnological methods of achieving indefinite lifespans over gradual uploading. But with the way things are, I am an advocate of gradual Mind Uploading first and foremost because I think it may prove necessary to prevent humanity from being left behind by recursively self-modifying superintelligences. I hope that it ultimately will not prove necessary – but at the current time I feel that it is somewhat likely.

Most people who wish to implement or accelerate an intelligence explosion a la I.J. Good, and more recently Vernor Vinge and Ray Kurzweil, wish to do so because they feel that such a recursively self-modifying superintelligence (RSMSI) could essentially solve all of humanity’s problems – disease, death, scarcity, existential insecurity. I think that the potential benefits of creating a RSMSI are superseded by the drastic increase in existential risk it would entail in making any one entity superintelligent relative to humanity. The old God of yore is finally going out of fashion, one and a quarter centuries late to his own eulogy. Let’s please not make another one, now with a little reality under his belt this time around.

Intelligence is a far greater source of existential and global catastrophic risk than any technology that could be wielded by such an intelligence (except, of course, for technologies that would allow an intelligence to increase its own intelligence). Intelligence can invent new technologies and conceive of ways to counteract any defense systems we put in place to protect against the destructive potentials of any given technology. A superintelligence is far more dangerous than rogue nanotech (i.e., grey-goo) or bioweapons. When intelligence comes into play, then all bets are off. I think culture exemplifies this prominently enough. Moreover, for the first time in history the technological solutions to these problems – death, disease, scarcity – are on the conceptual horizon. We can fix these problems ourselves, without creating an effective God relative to Man and incurring the extreme potential for complete human extinction that such a relative superintelligence would entail.

Thus uploading constitutes one of the means by which humanity can choose, volitionally, to stay on the leading edge of change, discovery, invention, and novelty, if the creation of a RSMSI is indeed imminent. It is not that we wish to become machines and eschew our humanity – rather, the loss of autonomy and freedom inherent in the creation of a relative superintelligence is antithetical to the defining features of humanity. In order to preserve the uniquely human thrust toward greater self-determination in the face of such a RSMSI, or at least be given the choice of doing so, we may require the ability to gradually upload so as to stay on equal footing in terms of speed of thought and general level of intelligence (which is roughly correlative with the capacity to effect change in the world and thus to determine its determining circumstances and conditions as well).

In a perfect world we wouldn’t need to take the chance of phenomenal discontinuity inherent in gradual uploading. In gradual uploading there is always a chance, no matter how small, that we will come out the other side of the procedure as a different (i.e., phenomenally distinct) person. We can seek to minimize the chances of that outcome by extending the degree of graduality with which we gradually replace the material constituents of the mind, and by minimizing the scale at which we gradually replace those material constituents (i.e., gradual substrate replacement one ion-channel at a time would be likelier to ensure the preservation of phenomenal continuity than gradual substrate replacement neuron by neuron would be). But there is always a chance.

This is why biotechnological means of indefinite lifespans have an immediate advantage over uploading, and why if non-human RSMSI were not a worry, I would favor biotechnological methods of indefinite lifespans over Mind Uploading. But this isn’t the case; rogue RSMSI are a potential problem, and so the ability to secure our own autonomy in the face of a rising RSMSI may necessitate advocating Mind Uploading over biotechnological methods of indefinite lifespans.

Mind Uploading has some ancillary benefits over biotechnological means of indefinite lifespans as well, however. If functional equivalence is validated (i.e., if it is validated that the basic approach works), mitigating existing sources of damage becomes categorically easier. In physical embodiment, repairing structural, connectional, or procedural sub-systems in the body requires (1) a means of determining the source of damage and (2) a host of technologies and corresponding methodologies to enter the body and make physical changes to negate or otherwise obviate the structural, connectional, or procedural source of such damages, and then exit the body without damaging or causing dysfunction to other systems in the process. Both of these requirements become much easier in the virtual embodiment of whole-brain emulation.

First, looking toward requirement (2): we do not need to actually design any technologies and methodologies for entering and leaving the system without damage or dysfunction, or for actually implementing physical changes leading to the remediation of the sources of damage. In virtual embodiment this requires nothing more than rewriting information. In the case of WBE we have the capacity to rewrite information as easily as it was written in the first place; while we would still need to know what changes to make (which is really the hard part), actually implementing those changes is as easy as rewriting a word file. There is no categorical difference, since it is all information, and we would already have a means of rewriting it.

Looking toward requirement (1) – actually elucidating the structural, connectional, or procedural sources of damage and/or dysfunction – we see that virtual embodiment makes this much easier as well. In physical embodiment we would need to make changes to the system in order to determine the source of the damage. In virtual embodiment we could run a section of emulation for a given amount of time, change or eliminate a given informational variable (e.g., a structure or component), and see how this affects the emergent system-state of the emulation instance.

Iteratively doing this to different components and different sequences of components, in trial-and-error fashion, should lead to the elucidation of the structural, connectional or procedural sources of damage and dysfunction. The fact that an emulation can be run faster (thus accelerating this iterative change-and-check procedure) and that we can “rewind” or “play back” an instance of emulation time exactly as it occurred initially means that noise (i.e., sources of error) from natural systemic state-changes would not affect the results of this procedure, whereas in physicality systems and structures are always changing, which constitutes a source of experimental noise. The conditions of the experiment would be exactly the same in every iteration of this change-and-check procedure. Moreover, the ability to arbitrarily speed up and slow down the emulation will aid in our detecting and locating the emergent changes caused by changing or eliminating a given microscale component, structure, or process.
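
A hedged sketch of this change-and-check loop follows; the API is entirely hypothetical (the disable method and the run/distance functions stand in for whatever a real emulation framework would provide), but it shows how deterministic replay removes experimental noise from the procedure:

```python
import copy

def change_and_check(initial_state, components, run_segment, distance):
    """Flag components whose removal shifts the emulation's emergent state.

    initial_state: snapshot that can be replayed exactly (no systemic noise).
    components:    candidate structures/variables to perturb, one per run.
    run_segment:   function(state) -> final system state after a fixed duration.
    distance:      function(state_a, state_b) -> scalar divergence measure.
    """
    baseline = run_segment(copy.deepcopy(initial_state))
    implicated = []
    for component in components:
        trial = copy.deepcopy(initial_state)
        trial.disable(component)          # change/eliminate one variable
        outcome = run_segment(trial)      # identical conditions every run
        if distance(outcome, baseline) > 0.0:
            implicated.append(component)  # emergent state diverged
    return implicated
```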

Thus the process of finding the sources of damage correlative with disease and aging (especially insofar as the brain is concerned) could be greatly improved through the process of uploading. Moreover, WBE should accelerate the technological and methodological development of the computational emulation of biological systems in general, meaning that it would be possible to use such procedures to detect the structural, connectional, and procedural sources of age-related damage and systemic dysfunction in the body itself, as opposed to just the brain, as well.

Note that this iterative change-and-check procedure would be just as possible via destructive uploading as it would with gradual uploading. Moreover, in terms of people actually instantiated as whole-brain emulations, actually remediating those structural, connectional, and/or procedural sources of damage is much easier for WBEs than for physically embodied humans. Incidentally, if being able to distinguish the homeostatic, regulatory, and metabolic structures and processes in the brain from the computational or signal-processing structures and processes in the brain is a requirement for uploading (which I don’t think it necessarily is, although I do think that such a distinction would decrease the ultimate computational intensity and thus the computational requirements of uploading, thereby allowing it to be implemented sooner and have wider availability), then this iterative change-and-check procedure could be used to accelerate the elucidation of such a distinction as well, for the same reasons that it could accelerate the elucidation of the structural, connectional, and procedural sources of age-related systemic damage and dysfunction.

Lastly, while uploading (particularly instances in which a single entity or small group of entities is uploaded prior to the rest of humanity – i.e. not a maximally distributed intelligence explosion) itself constitutes a source of existential risk, it also constitutes a means of mitigating existential risk as well. Currently we stand on the surface of the earth, naked to whatever might lurk in the deep night of space. We have not been watching the sky for long enough to know with any certainty that some unforeseen cosmic process could not come along to wipe us out at any time. Uploading would allow at least a small portion of humanity to live virtually on a computational substrate located deep underground, away from the surface of the earth and its inherent dangers, thus preserving the future human heritage should an extinction event befall humanity. Uploading would also prevent the danger of being physically killed by some accident of physicality, like being hit by a bus or struck by lightning.

Uploading is also the most resource-efficient means of life-extension on the table, because virtual embodiment essentially negates the need for most physical resources, requiring only one – energy – and increasing computational price-performance means that just how much a given amount of energy can do is continually increasing.

It also mitigates the most pressing ethical problem of indefinite lifespans – overpopulation. In virtual embodiment, overpopulation ceases to be an issue almost ipso facto. I agree with John Smart’s STEM compression hypothesis – that in the long run the advantages proffered by virtual embodiment will make choosing it over physical embodiment an obvious choice for most civilizations – and I think it will be the volitional choice for most future persons. It is safer, more resource-efficient (and thus more ethical, if one thinks that forestalling future births in order to maintain existing life is unethical), and the more advantageous choice. We will not need to say: migrate into virtuality if you want another physically embodied child. Most people will make the choice to go VR themselves, simply due to the numerous advantages and the lack of any experiential incomparabilities (i.e., modalities of experience possible in physicality but not in VR).

So in summary, yes, Mind Uploading (especially gradual uploading) is more a means of life-extension than a means to arbitrarily greater speed of thought, intelligence, or power (i.e., capacity to effect change in the world). We do not seek to become machines, only to retain the capability of choosing to remain on equal footing with them if the creation of RSMSI is indeed imminent. There is no other reason to increase our collective speed of thought, and to do so would be arbitrary – unless we expected to be unable to prevent the physical end of the universe, in which case doing so would increase the ultimate amount of time and the number of lives that could be instantiated in the time we have left.

The fallibility of many of these misconceptions may be glaringly obvious, especially to those readers familiar with Mind Uploading as notion and Substrate-Independent Minds and/or Whole-Brain Emulation as disciplines. I may be to some extent preaching to the choir in these cases. But I find many of these misinterpretations far too predominant and recurrent to be left alone.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Intimations of Imitations: Visions of Cellular Prosthesis and Functionally Restorative Medicine – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 23, 2013
******************************

In this essay I argue that the technologies and techniques used and developed in the fields of Synthetic Ion Channels and Ion-Channel Reconstitution – which have emerged from the fields of supramolecular chemistry and bio-organic chemistry over the past four decades – can be applied toward the purpose of gradual cellular (and particularly neuronal) replacement. This would create a new interdisciplinary field that directs such techniques and technologies toward the goal of the indefinite functional restoration of cellular mechanisms and systems, as opposed to their currently proposed uses of aiding in the elucidation of cellular mechanisms and their underlying principles, and of serving as biosensors.

In earlier essays (see here and here) I identified approaches to the synthesis of non-biological functional equivalents of neuronal components (i.e., ion-channels, ion-pumps, and membrane sections) and their sectional integration with the existing biological neuron – a sort of “physical” emulation, if you will. It has only recently come to my attention that there is an existing field, emerging from supramolecular and bio-organic chemistry, centered around the design, synthesis, and incorporation/integration of both synthetic/artificial ion channels and artificial bilipid membranes (i.e., lipid bilayers). The potential uses for such channels commonly listed in the literature have nothing to do with life-extension, however, and the field has, to my knowledge, yet to envision using them to replace our existing neuronal components as they degrade (or before they are able to), instead seeing such uses as aiding in the elucidation of cellular operations and mechanisms and as biosensors. I argue here that the very technologies and techniques that constitute the field (Synthetic Ion Channels & Ion-Channel/Membrane Reconstitution) can be used toward the purposes of indefinite longevity and life-extension through the iterative replacement of cellular constituents (particularly the components comprising our neurons – ion-channels, ion-pumps, sections of bilipid membrane, etc.) so as to negate the molecular degradation they would otherwise eventually have undergone.

While I envisioned an electro-mechanical-systems approach in my earlier essays, the field of Synthetic Ion-Channels has, from its start in the early 1970s, applied a molecular approach to the problem of designing molecular systems that produce certain functions according to their chemical composition or structure. Note that this approach corresponds to (or can be categorized under) the passive-physicalist sub-approach of the physicalist-functionalist approach (the broad approach overlying all varieties of physically embodied, “prosthetic” neuronal functional replication) identified in an earlier essay.

The field of synthetic ion channels is also referred to as ion-channel reconstitution, which designates “the solubilization of the membrane, the isolation of the channel protein from the other membrane constituents and the reintroduction of that protein into some form of artificial membrane system that facilitates the measurement of channel function,” and more broadly denotes “the [general] study of ion channel function and can be used to describe the incorporation of intact membrane vesicles, including the protein of interest, into artificial membrane systems that allow the properties of the channel to be investigated” [1]. The field has been active since the 1970s, with experimental successes throughout the 1980s, 1990s, and 2000s in the incorporation of functioning synthetic ion channels into biological bilipid membranes, and into artificial membranes dissimilar in molecular composition and structure to their biological analogues, while preserving the underlying supramolecular interactions, ion selectivity, and permeability. The relevant literature suggests that their proposed use has thus far been limited to the elucidation of ion-channel function and operation, the investigation of their functional and biophysical properties, and to a lesser degree for the purpose of “in-vitro sensing devices to detect the presence of physiologically active substances including antiseptics, antibiotics, neurotransmitters, and others” through the “… transduction of bioelectrical and biochemical events into measurable electrical signals” [2].

Thus my proposal of gradually integrating artificial ion-channels and/or artificial membrane sections for the purpose of indefinite longevity appears to be novel, while the notion of artificial ion-channels and neuronal membrane systems in general had already been conceived (and successfully created and experimentally verified, though presumably not integrated in vivo). That proposal encompasses both their use in replacing existing biological neurons toward the aim of gradual substrate replacement, and the alternative use of constructing artificial neurons that, rather than replacing existing biological neurons, become integrated with existing biological neural networks toward the aim of intelligence amplification and augmentation while assuming functional and experiential continuity with our existing biological nervous system.

The field of Functionally Restorative Medicine (and the orphan sub-field of whole-brain gradual-substrate replacement, or "physically embodied" brain-emulation, if you like) can take advantage of the decades of experimental progress in this field, incorporating the technological and methodological infrastructures used in and underlying the fields of Ion-Channel Reconstitution and Synthetic/Artificial Ion Channels and Membrane-Systems (along with the technologies and methodologies underlying their corresponding experimental-verification and incorporation techniques) for the purpose of indefinite functional restoration via the gradual and iterative replacement of neuronal components (including sections of bilipid membrane, ion channels, and ion pumps) by MEMS (micro-electro-mechanical systems) or, more likely, NEMS (nano-electro-mechanical systems).

The technological and methodological infrastructure underlying this field can be utilized both for the creation of artificial neurons and for the artificial synthesis of normative biological neurons. Much work in the field required artificially synthesizing cellular components (e.g., bilipid membranes) with structural and functional properties as similar to normative biological cells as possible, so that alternative designs (i.e., those dissimilar to the normal structural and functional modalities of biological cells or cellular components) could be effectively tested for how they affect and elucidate cellular properties. The iterative replacement of single neurons, or the sectional replacement of neurons with synthesized cellular components (including sections of the bilipid membrane, voltage-dependent ion-channels, ligand-dependent ion-channels, ion pumps, etc.), is made possible by the large body of work already done in the field. Consequently, the technological, methodological, and experimental infrastructures developed for the fields of Synthetic Ion Channels and Ion-Channel/Artificial-Membrane Reconstitution can be utilized for the purpose of (a) iterative replacement and cellular upkeep via biological analogues (or analogues not differing significantly in structure or in functional and operational modality from their normal biological counterparts) and/or (b) iterative replacement with non-biological analogues of alternate structural and/or functional modalities.

Rather than sensing when a given component degrades and then replacing it with an artificially synthesized biological or non-biological analogue, it appears to be much more efficient to determine the projected time it takes for a given component to degrade or otherwise lose functionality, and simply to automate its iterative replacement on that schedule, without providing in vivo systems for detecting molecular or structural degradation. This would allow us to achieve both experimental and pragmatic success in such cellular prosthesis sooner, because it doesn't rely on the complex technological and methodological infrastructure underlying in vivo sensing (especially at the scale of single neuronal components like ion-channels), and because it avoids the operational or functional distortion that sensing could impose on the components being monitored.
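To make the scheduling idea concrete, here is a minimal Python sketch of such time-based (rather than sensing-based) replacement. Everything in it is hypothetical: the component names, the lifetime figures, and the safety factor are placeholders for values that would have to be determined experimentally.

```python
# Illustrative sketch only: schedule proactive replacement of neuronal
# components from assumed lifetime estimates, with no in vivo sensing.

# Hypothetical projected functional lifetimes (in days) per component type.
PROJECTED_LIFETIME_DAYS = {
    "voltage_gated_ion_channel": 14.0,
    "ligand_gated_ion_channel": 10.0,
    "ion_pump": 21.0,
    "membrane_section": 30.0,
}

SAFETY_FACTOR = 0.5  # replace well before the projected degradation point

def replacement_schedule(components, horizon_days):
    """Return sorted (day, component_id) replacement events over the horizon.

    `components` is a list of (component_id, component_type) pairs. Each
    component is replaced at fixed intervals derived from its projected
    lifetime scaled by the safety factor; no degradation sensor appears
    anywhere in the loop.
    """
    events = []
    for component_id, component_type in components:
        interval = PROJECTED_LIFETIME_DAYS[component_type] * SAFETY_FACTOR
        day = interval
        while day <= horizon_days:
            events.append((day, component_id))
            day += interval
    return sorted(events)

if __name__ == "__main__":
    demo = [("ch-001", "voltage_gated_ion_channel"), ("pump-001", "ion_pump")]
    for day, cid in replacement_schedule(demo, horizon_days=60):
        print(f"day {day:5.1f}: replace {cid}")
```

The point of the design is that the schedule is computed entirely from projected lifetimes, which is exactly what removes the need for in vivo degradation sensing.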

A survey of progress in the field [3] lists several broad design motifs. I will first list the design motifs falling within the scope of the survey, along with the examples it provides. Selections from both papers are meant to show the depth and breadth of the field, rather than to elucidate the specific chemical or kinetic operations under the purview of each design variety.

For a much more comprehensive, interactive bibliography of papers falling within the field of Synthetic Ion Channels, or constituting the historical foundations of the field, see Jon Chui's online bibliography here, which charts the developments in this field up until 2011.

First Survey

Unimolecular ion channels:

Examples include (a) synthetic ion channels with oligocrown ionophores, [5] (b) using α-helical peptide scaffolds and rigid push–pull p-octiphenyl scaffolds for the recognition of polarized membranes, [6] and (c) modified varieties of the ß-helical scaffold of gramicidin A [7].

Barrel-stave supramolecules:

Examples of this general class include voltage-gated synthetic ion channels formed by macrocyclic bolaamphiphiles and rigid-rod p-octiphenyl polyols [8].

Macrocyclic, branched and linear non-peptide bolaamphiphiles as staves:

Examples of this sub-class include synthetic ion channels formed by (a) macrocyclic, branched, and linear bolaamphiphiles and dimeric steroids [9], and by (b) non-peptide macrocycles, acyclic analogs, and peptide macrocycles containing abiotic amino acids, respectively [10].

Dimeric steroid staves:

Examples of this sub-class include channels using polyhydroxylated norcholentriol dimers [11].

p-Oligophenyls as staves in rigid-rod ß-barrels:

Examples of this sub-class include “cylindrical self-assembly of rigid-rod ß-barrel pores preorganized by the nonplanarity of p-octiphenyl staves in octapeptide-p-octiphenyl monomers” [12].

Synthetic polymers:

Examples of this sub-class include synthetic ion channels and pores composed of (a) polyalanine, (b) polyisocyanates, and (c) polyacrylates, [13] formed by (i) ionophoric, (ii) ‘smart’, and (iii) cationic polymers [14]; (d) surface-attached poly(vinyl-n-alkylpyridinium) [15]; (e) cationic oligo-polymers [16]; and (f) poly(m-phenylene ethylenes) [17].

Helical ß-peptides (used as staves in the barrel-stave method):

Examples of this class include cationic ß-peptides with antibiotic activity, presumably acting as amphiphilic helices that form micellar pores in anionic bilayer membranes [18].

Monomeric steroids:

Examples of this sub-class include synthetic carriers, channels and pores formed by monomeric steroids [19], synthetic cationic steroid antibiotics that may act by forming micellar pores in anionic membranes [20], neutral steroids as anion carriers [21], and supramolecular ion channels [22].

Complex minimalist systems:

Examples of this sub-class falling within the scope of this survey include (a) ‘minimalist’ amphiphiles as synthetic ion channels and pores [23]; (b) membrane-active ‘smart’ double-chain amphiphiles, expected to form ‘micellar pores’ or self-assemble into ion channels in response to acid or light [24]; and (c) double-chain amphiphiles that may form ‘micellar pores’ at the boundary between photopolymerized and host bilayer domains, along with representative peptide conjugates that may self-assemble into supramolecular pores or exhibit antibiotic activity [25].

Non-peptide macrocycles as hoops:

Examples of this sub-class falling within the scope of this survey include synthetic ion channels formed by non-peptide macrocycles and acyclic analogs [26], and by peptide macrocycles containing abiotic amino acids [27].

Peptide macrocycles as hoops and staves:

Examples of this sub-class include (a) synthetic ion channels formed by the self-assembly of macrocyclic peptides into genuine barrel-hoop motifs that mimic the ß-helix of gramicidin A with cyclic ß-sheets; the macrocycles are designed to bind on top of channels, and cationic antibiotics (and several analogs) are proposed to form micellar pores in anionic membranes [28]; (b) synthetic carriers, antibiotics (and analogs), and pores (and analogs) formed by macrocyclic peptides with non-natural subunits, certain of which may act as ß-sheets, possibly as staves of ß-barrel-like pores [29]; and (c) bioengineered pores as sensors, where covalent capturing and fragmentations have been observed at the single-molecule level within an engineered α-hemolysin pore containing an internal reactive thiol [30].

Summary

Thus, even without knowledge of supramolecular or organic chemistry, one can see that a variety of alternative approaches to the creation of synthetic ion channels, and several sub-approaches within each larger ‘design motif’ or broad approach, not only exist but have been experimentally verified, diversified, and refined.

Second Survey

The following selections [31] illustrate the chemical, structural, and functional varieties of synthetic ion channels, categorized according to whether they are cation-conducting or anion-conducting. These examples are used to further emphasize the extent of the field and the number of alternative approaches to synthetic ion-channel design, implementation, integration, and experimental verification already in existence. Permission to use all the following selections and figures was obtained from the author of the source.

There are five classical design-motifs for synthetic ion-channels, categorized by structure, that are identified within the paper:

A: Unimolecular macromolecules,
B: Complex barrel-stave,
C: Barrel-rosette,
D: Barrel-hoop, and
E: Micellar supramolecules.

Cation Conducting Channels:

UNIMOLECULAR

“The first non-peptidic artificial ion channel was reported by Kobuke et al. in 1992” [33].

“The channel contained an amphiphilic ion pair consisting of oligoether-carboxylates and mono- (or di-) octadecylammonium cations. The carboxylates formed the channel core and the cations formed the hydrophobic outer wall, which was embedded in the bilipid membrane with a channel length of about 24 to 30 Å. The resultant ion channel, formed from molecular self-assembly, is cation-selective and voltage-dependent” [34].

“Later, Kobuke et al. synthesized another channel comprising a resorcinol-based cyclic tetramer as the building block. The resorcin[4]arene monomer consisted of four long alkyl chains which aggregated to form a dimeric supramolecular structure resembling that of gramicidin A” [35].

“Gokel et al. studied [a set of] simple yet fully functional ion channels known as ‘hydraphiles’” [39].

“An example (channel 3) is shown in Figure 1.6, consisting of diaza-18-crown-6 crown ether groups and alkyl chains as side arms and spacers. Channel 3 is capable of transporting protons across the bilayer membrane” [40].

“A covalently bonded macrotetracycle (Figure 1.8) was shown to be about three times more active than Gokel’s ‘hydraphile’ channel, and its amide-containing analogue also showed enhanced activity” [44].

“Inorganic derivatives using crown ethers have also been synthesized. Hall et al. synthesized an ion channel consisting of a ferrocene and 4 diaza-18-crown-6 linked by 2 dodecyl chains (Figure 1.9). The ion channel was redox-active, as oxidation of the ferrocene caused the compound to switch to an inactive form” [45].

BARREL-STAVE:

“These are more difficult to synthesize [in comparison to unimolecular varieties] because the channel formation usually involves self-assembly via non-covalent interactions” [47].

“A cyclic peptide composed of an even number of alternating D- and L-amino acids (Figure 1.10) was suggested by De Santis to form a barrel-hoop structure through backbone-backbone hydrogen bonds” [49].

“A peptide nanotube synthesized by Ghadiri et al., consisting of cyclic D- and L-peptide subunits, forms a flat, ring-shaped conformation that stacks through extensive anti-parallel ß-sheet-like hydrogen-bonding interactions (Figure 1.11)” [51].

“Experimental results have shown that the channel can transport sodium and potassium ions. The channel can also be constructed by the use of direct covalent bonding between the sheets so as to increase the thermodynamic and kinetic stability” [52].

“By attaching peptides to the octiphenyl scaffold, a ß-barrel can be formed via self-assembly through the formation of ß-sheet structures between the peptide chains (Figure 1.13)” [53].

“The same scaffold was used by Matile et al. to mimic the structure of the macrolide antibiotic amphotericin B. The channel synthesized was shown to transport cations across the membrane” [54].

“Attaching electron-poor naphthalene diimides (NDIs) to the same octiphenyl scaffold led to a hoop-stave mismatch during self-assembly that resulted in a twisted and closed channel conformation (Figure 1.14). Adding the complementary dialkoxynaphthalene (DAN) donor led to cooperative interactions between NDI and DAN that favored the formation of a barrel-stave ion channel” [57].

MICELLAR

“These aggregate channels are formed by amphotericin involving both sterols and antibiotics arranged in two half-channel sections within the membrane” [58].

“An active form of the compound is the bolaamphiphile (two-headed amphiphile). Figure 1.15 shows an example that forms an active channel structure through dimerization or trimerization within the bilayer membrane. Electrochemical studies have shown that the monomer is inactive and that the active form involves a dimer or larger aggregates” [60].

ANION CONDUCTING CHANNELS:

“A highly active, anion selective, monomeric cyclodextrin-based ion channel was designed by Madhavan et al. (Figure 1.16). Oligoether chains were attached to the primary face of the ß-cyclodextrin head group via amide bonds. The hydrophobic oligoether chains were chosen because they are long enough to span the entire lipid bilayer. The channel was able to select “anions over cations” and “discriminate among halide anions in the order I- > Br- > Cl- (following Hofmeister series)” [61].

“The anion selectivity occurred via the ring of ammonium cations being positioned just beside the cyclodextrin head group, which helped to facilitate anion selectivity. Iodide ions were transported the fastest because the activation barrier to enter the hydrophobic channel core is lower for I- compared to either Br- or Cl-” [62].

“A more specific artificial anion-selective ion channel was the chloride-selective ion channel synthesized by Gokel. The building block involved a heptapeptide with proline incorporated (Figure 1.17)” [63].

Cellular Prosthesis: Inklings of a New Interdisciplinary Approach

The paper cites “nanoreactors for catalysis and chemical or biological sensors” and “interdisciplinary uses as nano-filtration membrane, drug or gene delivery vehicles/transporters as well as channel-based antibiotics that may kill bacterial cells preferentially over mammalian cells” as some of the main applications of synthetic ion-channels [65], other than their normative use in elucidating cellular function and operation.

However, I argue that a whole interdisciplinary field, a heretofore-unrecognized approach or sub-field of Functionally Restorative Medicine, is made possible by taking the technologies and techniques involved in constructing, integrating, and experimentally verifying either (a) non-biological analogues of ion-channels and ion-pumps (and thus of transmembrane proteins in general, also sometimes referred to as transport proteins or integral membrane proteins) and of membranes (which include normative bilipid membranes, non-lipid membranes, and chemically augmented bilipid membranes), or (b) artificially synthesized biological analogues of ion-channels, ion-pumps, and membranes, which are structurally and chemically equivalent to naturally occurring biological components but are synthesized artificially, and applying those technologies and techniques toward the gradual replacement of the existing biological neurons constituting our nervous systems, or at least of those neuron populations that comprise the neocortex and prefrontal cortex, thereby achieving indefinite longevity through iterative procedures of gradual replacement. There is still work to be done in determining the comparative advantages and disadvantages of various structural and functional (i.e., design) motifs, and in the logistics of implementing the iterative replacement or reconstitution of ion-channels, ion-pumps, and sections of neuronal membrane in vivo.

The conceptual schemes outlined in Concepts for Functional Replication of Biological Neurons [66], Gradual Neuron Replacement for the Preservation of Subjective-Continuity [67], and Wireless Synapses, Artificial Plasticity, and Neuromodulation [68] would constitute variations on the basic approach underlying this proposed, embryonic interdisciplinary field. Certain approaches within nanomedicine itself, particularly those that constitute the functional emulation of existing cell types, such as (but not limited to) Robert Freitas's conceptual design for the functional emulation of the red blood cell (a.k.a. erythrocyte, haematid) [69], the Respirocyte, should likewise be seen as falling under the purview of this new approach, although not all approaches to nanomedicine (e.g., diagnostics, drug delivery, and neuroelectronic interfacing) constitute the physical (i.e., electromechanical, kinetic, and/or molecular, physically embodied) and functional emulation of biological cells.

The field of functionally-restorative medicine in general (and of nanomedicine in particular) and the fields of supramolecular and organic chemistry converge here, where these technological, methodological, and experimental infrastructures developed in the fields of Synthetic Ion-Channels and Ion Channel Reconstitution can be employed to develop a new interdisciplinary approach that applies the logic of prosthesis to the cellular and cellular-component (i.e., sub-cellular) scale; same tools, new use. These techniques could be used to iteratively replace the components of our neurons as they degrade, or to replace them with more robust systems that are less susceptible to molecular degradation. Instead of repairing the cellular DNA, RNA, and protein transcription and synthesis machinery, we bypass it completely by configuring and integrating the neuronal components (ion-channels, ion-pumps, and sections of bilipid membrane) directly.

Thus I suggest that theoreticians of nanomedicine look to the large quantity of literature already developed in the emerging fields of synthetic ion-channels and membrane-reconstitution, towards the objective of adapting and applying existing technologies and methodologies to the new purpose of iterative maintenance, upkeep and/or replacement of cellular (and particularly neuronal) constituents with either non-biological analogues or artificially synthesized but chemically/structurally equivalent biological analogues.

This new sub-field of Synthetic Biology needs a name to differentiate it from the other approaches to Functionally Restorative Medicine. I suggest the designation ‘cellular prosthesis’.

References:

[1] Williams (1994). An introduction to the methods available for ion channel reconstitution. In D. C. Ogden (Ed.), Microelectrode Techniques: The Plymouth Workshop Edition. Cambridge: Company of Biologists.

[2] Tomich, J., & Montal, M. (1996). U.S. Patent No. 5,16,890. Washington, DC: U.S. Patent and Trademark Office.

[3] Matile, S., Som, A., & Sorde, N. (2004). Recent synthetic ion channels and pores. Tetrahedron, 60(31), 6405–6435. ISSN 0040-4020. doi:10.1016/j.tet.2004.05.052. Access: http://www.sciencedirect.com/science/article/pii/S0040402004007690

[4] Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.

[5] Ibid., p. 6411.

[6] Ibid., p. 6416.

[7] Ibid., p. 6413.

[8] Ibid., p. 6412.

[9] Ibid., p. 6414.

[10] Ibid., p. 6425.

[11] Ibid., p. 6427.

[12] Ibid., p. 6416.

[13] Ibid., p. 6419.

[14] Ibid.

[15] Ibid.

[16] Ibid., p. 6419.

[17] Ibid.

[18] Ibid., p. 6421.

[19] Ibid., p. 6422.

[20] Ibid.

[21] Ibid.

[22] Ibid.

[23] Ibid., p. 6423.

[24] Ibid.

[25] Ibid.

[26] Ibid., p. 6426.

[27] Ibid.

[28] Ibid., p. 6427.

[29] Ibid., p. 6427.

[30] Ibid., p. 6427.

[31] Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.

[32] Ibid., p. 4.

[33] Ibid.

[34] Ibid.

[35] Ibid.

[36] Ibid., p. 7.

[37] Ibid., p. 8.

[38] Ibid., p. 7.

[39] Ibid.

[40] Ibid.

[41] Ibid.

[42] Ibid.

[43] Ibid., p. 8.

[44] Ibid.

[45] Ibid., p. 9.

[46] Ibid.

[47] Ibid.

[48] Ibid., p. 10.

[49] Ibid.

[50] Ibid.

[51] Ibid.

[52] Ibid., p. 11.

[53] Ibid., p. 12.

[54] Ibid.

[55] Ibid.

[56] Ibid.

[57] Ibid.

[58] Ibid., p. 13.

[59] Ibid.

[60] Ibid., p. 14.

[61] Ibid.

[62] Ibid.

[63] Ibid., p. 15.

[64] Ibid.

[65] Ibid.

[66] Cortese, F., (2013). Concepts for Functional Replication of Biological Neurons. The Rational Argumentator. Access: https://www.rationalargumentator.com/index/blog/2013/05/gradual-neuron-replacement/

[67] Cortese, F., (2013). Gradual Neuron Replacement for the Preservation of Subjective-Continuity. The Rational Argumentator. Access: https://www.rationalargumentator.com/index/blog/2013/05/gradual-neuron-replacement/

[68] Cortese, F., (2013). Wireless Synapses, Artificial Plasticity, and Neuromodulation. The Rational Argumentator. Access: https://www.rationalargumentator.com/index/blog/2013/05/wireless-synapses/

[69] Freitas Jr., R. (1998). “Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell”. Artificial Cells, Blood Substitutes, and Immobilization Biotechnology, 26: 411–430. Access: http://www.ncbi.nlm.nih.gov/pubmed/9663339

Choosing the Right Scale for Brain Emulation – Article by Franco Cortese

Choosing the Right Scale for Brain Emulation – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 2, 2013
******************************
This essay is the ninth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first eight chapters were previously published on The Rational Argumentator under the following titles:
***

The two approaches falling within this class considered thus far are (a) computational models that model the biophysical (e.g., electromagnetic, chemical, and kinetic) operation of the neurons, i.e., the physical processes instantiating their emergent functionality, whether at the scale of tissues, molecules, and/or atoms, or anything in between; and (b) abstracted models, a term which designates anything that computationally models the neuron using the (sub-neuron but super-protein-complex) components themselves as the chosen model-scale, whereas the former uses for its chosen model-scale the scale at which the physical processes emergently instantiating those higher-level neuronal components exist, such as the membrane and the individual proteins forming the transmembrane protein-complexes. This holds regardless of whether each component is abstracted as a normative-electrical-component analogue (i.e., using circuit diagrams in place of biological schematics, like equating the lipid bilayer membrane with a capacitor connected to a variable battery) or as a mathematical model in which a relevant component or aspect of the neuron becomes a term (e.g., a variable or constant) in an equation.
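The "capacitor connected to a variable battery" abstraction mentioned above is, in essence, the leaky-membrane equation used in conductance-based neuron models. Here is a minimal Python sketch of that circuit-level abstraction; the parameter values are illustrative placeholders rather than measured quantities.

```python
# Minimal sketch of the membrane-as-circuit abstraction: the lipid bilayer
# as a capacitor (C_M) in parallel with a leak conductance (G_L) driven by
# a battery (E_L). Integrates C_M * dV/dt = -G_L * (V - E_L) + I_inj
# with forward Euler. All parameter values are illustrative.

C_M = 1.0    # membrane capacitance (uF/cm^2)
G_L = 0.1    # leak conductance (mS/cm^2)
E_L = -65.0  # leak reversal potential, the "variable battery" (mV)

def simulate_membrane(i_inj, v0=-65.0, dt=0.1, steps=1000):
    """Return the membrane-potential trace under a constant injected current."""
    v = v0
    trace = []
    for _ in range(steps):
        dv = (-G_L * (v - E_L) + i_inj) / C_M
        v += dv * dt
        trace.append(v)
    return trace

trace = simulate_membrane(i_inj=1.0)
# The potential relaxes toward E_L + I/G_L = -55 mV, which is precisely
# the behavior the circuit diagram encodes.
print(f"final membrane potential: {trace[-1]:.2f} mV")
```

The sketch shows what an "abstracted model" buys: a handful of circuit parameters stand in for the full molecular machinery of the membrane.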

It was during the process of trying to formulate different ways of mathematically (and otherwise computationally) modeling neurons or sub-neuron regions that I laid the conceptual embryo of the first new possible basis for subjective-continuity: the notion of operational isomorphism.

A New Approach to Subjective-Continuity Through Substrate Replacement

There are two other approaches to increasing the likelihood of subjective-continuity that I explored during this period, each based on one of two presumed possible physical bases for discontinuity. Note that these approaches are unrelated to graduality, which has been the main determining factor impacting the likelihood of subjective-continuity considered thus far. The new approaches consist of designing the NRUs so as to retain the respective postulated physical bases for subjective-continuity that exist in the biological brain. Thus they are unrelated to increasing the efficacy of the gradual-replacement procedure itself, being related instead to the design requirements of the functional-equivalents used to gradually replace the neurons in a way that maintains immediate subjective-continuity.

Operational Isomorphism

Whereas functionality deals only with the emergent effects or end-product of a given entity or process, operationality deals with the procedural operations performed so as to give rise to those emergent effects. A mathematical model of a neuron might be highly functionally equivalent while failing to be operationally equivalent in most respects. Isomorphism can be considered a measure of "sameness", but technically means a one-to-one correspondence between the elements of two sets (which would correspond with operational isomorphism) or between the sums or products of the elements of two sets (which would correspond with functional isomorphism, using the definition of functionality employed above). Thus, operational isomorphism is the degree to which the sub-components (be they material, as in entities, or procedural, as in processes) of two larger-scale components, or the operational modalities possessed by each respective collection of sub-components, are equivalent.
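To make the set-theoretic reading above explicit, here is a minimal formal gloss in my own notation (an interpretive sketch, not the author's formalism). Let A and B be the sets of sub-components (or procedural steps) of two components at the same scale, and let Φ map a set of sub-components to the emergent behavior it produces:

```latex
\begin{align*}
\text{operational isomorphism:} \quad & \exists\, f\colon A \to B \ \text{bijective, pairing each sub-component} \\
                                      & \text{(or operational rule) with an equivalent counterpart;} \\[4pt]
\text{functional isomorphism:}  \quad & \Phi(A) = \Phi(B), \ \text{i.e., only the aggregate outputs agree;} \\[4pt]
\text{degree of operational isomorphism:} \quad & \mathrm{iso}(A,B)
    = \frac{\bigl|\{\, a \in A : a \text{ has an operationally equivalent match in } B \,\}\bigr|}{\max\bigl(|A|,\,|B|\bigr)}.
\end{align*}
```

On this gloss, full operational isomorphism entails functional isomorphism (matching parts produce matching wholes), but not conversely, which is the asymmetry the paragraph above relies on.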

To what extent does the brain possess operational isomorphism? It seems to depend on the scale being considered. At the highest scale, different areas of the nervous system are classed as systems (as in functional taxonomies) or regions (as in anatomical taxonomies). At this level the separate regions (i.e., components of a shared scale) differ widely from one another in operational modality; they process information very differently from the way other components on the same scale do. If this scale were chosen as the model-scale of our replication-approach, and the preceding premise (that the physical basis for subjective-continuity is the degree of operational isomorphism between components at a given scale) is accepted, then we would have a high probability of replicating functionality but a low probability of retaining subjective-continuity through gradual replacement. This would be true even if we used the degree of operational isomorphism between separate components as the only determining factor for subjective-continuity and ignored concerns of graduality (e.g., the scale or rate, or scale-to-rate ratio, at which gradual substrate replacement occurs).

Contrast this to the molecular scale, where the operational modality of each component (being a given molecule) and the procedural rules determining the state-changes of components at this scale are highly isomorphic. The state-changes of a given molecule are determined by molecular and atomic forces. Thus if we use an informational-functionalist approach, choose a molecular scale for our model, and accept the same premises as the first example, we would have a high probability of both replicating functionality and retaining subjective-continuity through gradual replacement because the components (molecules) have a high degree of operational isomorphism.

Note that this is only a requirement for the sub-components instantiating the high-level neural regions/systems that embody our personalities and higher cognitive faculties such as the neocortex — i.e., we wouldn’t have to choose a molecular scale as our model scale (if it proved necessary for the reasons described above) for the whole brain, which would be very computationally intensive.

So at the atomic and molecular scale the brain possesses a high degree of operational isomorphism.

On the scale of the individual protein complexes, which collectively form a given sub-neuronal component (e.g., an ion channel), components still appear to possess a high degree of operational isomorphism, because all state-changes are determined by the rules governing proteins and protein-complexes (i.e., biochemistry, and particularly protein-protein interactions); by virtue of sharing the same general constituents (amino acids), the factors determining state-changes at this level are shared by all components at this scale.

The scale of individual neuronal components, however, seems to possess a comparatively lesser degree of operational isomorphism. Some ion channels are ligand-gated while others are voltage-gated; thus, different aspects of physicality (molecular shape and voltage, respectively) form the procedural rules determining state-changes at this scale. Since there are two different determining factors at this scale, its degree of operational isomorphism is comparatively less than that of the protein and protein-complex scale and the molecular scale, both of which appear to have only one governing procedural-rule set.

The scale of individual neurons, by contrast, appears to possess a greater degree of operational isomorphism: every neuron fires according to its threshold value, summing analog action-potential values into a binary output (i.e., the neuron either fires or does not). All individual neurons operate in a highly isomorphic manner. Even though individual neurons of a given type are more operationally isomorphic in relation to each other than to neurons of another type, all neurons regardless of type still act in a highly isomorphic manner.

However, the scale of neuron-clusters and neural networks, which operate and communicate according to spatiotemporal sequences of firing patterns (action-potential patterns), appears to possess a lesser degree of operational isomorphism than individual neurons, because different sequences of firing patterns will mean different things to two respective neural clusters or networks. Note also that at this scale the degree of functional isomorphism between components appears to be less than their degree of operational isomorphism; that is, the ways the clusters or networks operate are more similar to one another than are their actual functions (i.e., what they effectively do).

And lastly, at the scale of high-level neural regions/systems, components (i.e., neural regions) differ significantly in morphology, in operationality, and in functionality; thus they appear to constitute the scale possessing the least operational isomorphism.
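As a purely illustrative toy model of this ranking (the rule-set labels below are my own hypothetical stand-ins, not claims about actual neural organization), one can score a scale's degree of operational isomorphism as the probability that two randomly chosen components at that scale are governed by the same rule-set:

```python
# Toy illustration only: score each scale's degree of operational
# isomorphism as the fraction of component pairs sharing a governing
# rule-set. The rule-set labels are hypothetical stand-ins.
from itertools import combinations

SCALES = {
    "molecular":      ["molecular-forces"] * 6,
    "ion-channels":   ["voltage-gated"] * 3 + ["ligand-gated"] * 3,
    "neurons":        ["threshold-firing"] * 6,
    "neural-regions": ["cortical", "cerebellar", "thalamic",
                       "hippocampal", "brainstem", "striatal"],
}

def isomorphism_score(rule_sets):
    """Fraction of unordered component pairs governed by the same rule-set."""
    pairs = list(combinations(rule_sets, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

for scale, rules in SCALES.items():
    print(f"{scale:14s} -> {isomorphism_score(rules):.2f}")
# Prints 1.00 for the molecular and neuron scales, 0.40 for ion-channels
# (two rule-sets), and 0.00 for neural regions, mirroring the qualitative
# ranking described above.
```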

I will now illustrate the concept of operational isomorphism using the physical-functionalist and the informational-functionalist NRU approaches, respectively, as examples. In terms of the physical-functionalist (i.e., prosthetic neuron) approach, both the passive (i.e., "direct") and the CPU-controlled sub-classes are operationally isomorphic. An example of a physical-functionalist NRU that would not possess operational isomorphism is one that uses a passive-physicalist approach for one type of component (e.g., voltage-gated ion channels) and a CPU-controlled/cyber-physicalist approach [see Part 4 of this series] for another type of component (e.g., ligand-gated ion channels); on that scale the components act according to different technological and methodological infrastructures, exhibit different operational modalities, and thus appear to possess a low degree of operational isomorphism. Note that the concern is not the degree of operational isomorphism between the functional-replication units and their biological counterparts, but rather the degree of operational isomorphism between the functional-replication units and other units on the same scale.

Another possibly relevant type of operational isomorphism is the degree of isomorphism between the individual sub-components or procedural operations (i.e., "steps") composing a given component, designated here as intra-operational isomorphism. While very similar to the degree of isomorphism for the scale immediately below, this differs from (i.e., is not equivalent to) that designation in that the sub-components of a given larger component could be functionally isomorphic in relation to each other without being operationally isomorphic in relation to all other components on that scale. The passive sub-approach of the physical-functionalist approach would possess a greater degree of intra-operational isomorphism than the CPU-controlled/cyber-physicalist sub-approach, because presumably each component would interact with the others (via physically embodied feedback) according to the same technological and methodological infrastructure, be it mechanical, electrical, chemical, or otherwise. The CPU-controlled sub-approach, by contrast, would possess a lesser degree of intra-operational isomorphism, because the sensors, the CPU, and the electric or electromechanical systems, respectively (the three main sub-components for each singular neuronal component, e.g., an artificial ion channel), operate according to different technological and methodological infrastructures and thus exhibit alternate operational modalities in relation to each other.

In regard to the informational-functionalist approach, an NRU model that would be operationally isomorphic is one wherein, regardless of the scale used, the type of approach used to model a given component on that scale is as isomorphic as possible with the ones used to model other components on the same scale. For example, if one uses a mathematical model to simulate spiking regions of the dendritic spine, then one shouldn't use a non-mathematical (e.g., strict computational-logic) approach to model non-spiking regions of the dendritic spine. Since the number of variations to the informational-functionalist approach is greater than could exist for the physical-functionalist approach, there are more gradations to the degree of operational isomorphism. Using the exact same branches of mathematics to model the two respective components would confer a greater degree of operational isomorphism than using alternate mathematical techniques from different disciplines. Likewise, if we used different computational approaches to model the respective components, we would have a lesser degree of operational isomorphism. If we emulated some components while merely simulating others, we would have a lesser degree of operational isomorphism than if both were either strictly simulatory or strictly emulatory.

If this premise proves true, it suggests that when picking the scale of our replication-approach (be it physical-functionalist or informational-functionalist), we choose a scale that exhibits operational isomorphism—for example, the molecular scale rather than the scale of high-level neural-regions, and that we don’t use widely dissimilar types of modeling techniques to model one component (e.g., a molecular system) than we do for another component on the same scale.

Note that, unlike operational-continuity, the degree of operational isomorphism was not an explicit concept or potential physical basis for subjective-continuity at the time of my work on immortality (i.e., this concept wasn't yet fully fleshed out in 2010); rather, it was formulated in response to going over my notes from this period so as to distill the broad developmental gestalt of my project. Though it appears somewhat inherent in (i.e., hinted at by) those notes, it wasn't made explicit until relatively recently.

The next chapter describes the rest of my work on technological approaches to techno-immortality in 2010, focusing on a second new approach to subjective-continuity through a gradual-substrate-replacement procedure, and concluding with an overview of the ways my project differs from the other techno-immortalist projects.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Squishy Machines: Bio-Cybernetic Neuron Hybrids – Article by Franco Cortese

Squishy Machines: Bio-Cybernetic Neuron Hybrids – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 25, 2013
******************************
This essay is the eighth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first seven chapters were previously published on The Rational Argumentator under the following titles:
***

By 2009 I felt the major classes of physicalist-functionalist replication approaches to be largely developed, such that further work would produce only minor variations in approach and procedure. These developments consisted of contingency plans for the case that some aspect of neuronal operation couldn't be replicated with alternate, non-biological physical systems and processes, based around the goal of maintaining those biological (or otherwise organic) systems and processes artificially and of integrating them with the processes that could be reproduced artificially.

2009 also saw further developments in the computational approach, where I conceptualized a new sub-division in the larger class of the informational-functionalist (i.e., computational, which encompasses both simulation and emulation) replication approach, which is detailed in the next chapter.

Developments in the Physicalist Approach

During this time I explored mainly varieties of the cybernetic-physical functionalist approach. This involved the use of replicatory units that preserved certain biological aspects of the neuron while replacing certain others with functionalist replacements, and of other NRUs that preserved alternate biological aspects of the neuron while replacing different aspects with functional replacements. The reasoning behind this approach was twofold. The first consideration was that there was a chance, no matter how small, that we might fail to sufficiently replicate some relevant aspect(s) of the neuron either computationally or physically by failing to understand the underlying principles of that particular sub-process or aspect. The second was to have an approach that would work in the event that some material aspect couldn't be sufficiently replicated via non-biological, physically embodied systems (i.e., the normative physical-functionalist approach).

However, these varieties were conceived of in case we couldn’t replicate certain components successfully (i.e., without functional divergence). The chances of preserving subjective-continuity in such circumstances are increased by the number of varieties we have for this class of model (i.e., different arrangements of mechanical replacement components and biological components), because we don’t know which we would fail to functionally replicate.

This class of physical-functionalist model can be usefully considered as electromechanical-biological hybrids, wherein the receptors (i.e., transporter proteins) on the post-synaptic membrane are integrated with the artificial membrane and coexist with artificial ion-channels, or wherein the biological membrane is retained while the receptors and ion-channels are replaced with functional equivalents instead. The biological components would be extracted from the existing biological neurons and reintegrated with the artificial membrane. Otherwise, they would have to be synthesized via electromechanical systems, such as (but not limited to) the use of chemical stores of amino acids released in specific sequences to facilitate in vivo protein folding and synthesis; the resulting proteins would then be transported to and integrated with the artificial membrane. This is preferable to providing stores of pre-synthesized proteins, owing to the greater complexity of storing synthesized proteins without decay or functional degradation over storage time, and of restoring them from their "stored", inactive state to a functionally active state when they are ready for use.

During this time I also explored the possibility of using the neuron's existing protein-synthesis systems to facilitate the construction and gradual integration of the artificial sections with the existing lipid bilayer membrane. Work in synthetic biology allows us to use viral gene vectors to replace a given cell's constituent genome, thereby allowing us to make it manufacture various non-organic substances in place of the substances created via its normative protein synthesis. We could use such techniques to replace the existing protein-synthesis instructions with ones that manufacture and integrate the molecular materials constituting the artificial membrane sections, artificial ion-channels, and ion-pumps. Indeed, it may even be a functional necessity to gradually replace a given neuron's protein-synthesis machinery with machinery dedicated to the replacement, integration, and maintenance of the non-biological sections' material, because otherwise those parts of the neuron would still be trying to rebuild each section of lipid bilayer membrane we iteratively remove and replace. This could be problematic, and so for the successful gradual replacement of single neurons, a means of gradually switching off and/or replacing portions of the cell's protein-synthesis systems may be required.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.