
Gennady Stolyarov II Interviewed on “Lev and Jules Break the Rules” – Sowing Discourse, Episode #001


Gennady Stolyarov II
Jules Hamilton
Lev Polyakov


U.S. Transhumanist Party Chairman Gennady Stolyarov II was recently honored to be the first guest ever interviewed on the video channel Lev and Jules Break the Rules with Lev Polyakov and Jules Hamilton. Lev and Jules have produced this skillfully edited video of the conversation, with content references from the conversation inserted directly into the footage. For those who wish to explore broad questions related to technology, transhumanism, culture, economics, politics, philosophy, art, and even connections to popular films and computer games, this is the discussion to watch.

This video was originally posted here and is republished with permission. It is mirrored on Mr. Stolyarov’s YouTube channel here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our free Membership Application Form here. It takes less than a minute!


More information about Lev and Jules Break the Rules:
Patreon
Minds
Instagram
Twitter
Facebook

Review of Ray Kurzweil’s “How to Create a Mind” – Article by G. Stolyarov II


G. Stolyarov II


How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading toward the not-too-distant future (the late 2020s, according to Kurzweil) during which computers would be able to emulate human minds.

Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in its details, it can arise from the carrying out of simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.
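As a minimal illustration of this point – my own sketch, not an example from Kurzweil’s book – consider the “chaos game”: a single trivial rule, “jump halfway toward a randomly chosen corner of a triangle,” suffices to generate the intricately detailed Sierpinski triangle fractal.

```python
# Hypothetical illustration (not from Kurzweil's book): complex structure
# from one simple rule. The "chaos game" repeatedly jumps halfway toward
# a randomly chosen corner of a triangle; the visited points trace out
# the Sierpinski triangle fractal.
import random

CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]  # vertices of a triangle

def chaos_game(n_points=50_000):
    """Apply the one simple rule n_points times; return the visited points."""
    x, y = 0.25, 0.25  # arbitrary starting point
    points = []
    for _ in range(n_points):
        cx, cy = random.choice(CORNERS)    # pick a corner at random
        x, y = (x + cx) / 2, (y + cy) / 2  # move halfway toward it
        points.append((x, y))
    return points

if __name__ == "__main__":
    # Render coarsely as ASCII so the example has no plotting dependencies.
    width, height = 64, 32
    grid = [[" "] * width for _ in range(height)]
    for x, y in chaos_game():
        row = height - 1 - int(y / 0.87 * (height - 1))
        grid[row][int(x * (width - 1))] = "#"
    print("\n".join("".join(r) for r in grid))
```

The rule itself is fathomable in one sentence, yet the resulting structure exhibits detail at every scale – the same relationship Kurzweil posits between simple pattern-recognition rules and the emergent sophistication of conscious thought.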

With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of today.

The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (which showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. Especially if the task is to engineer a mind that has at least the faculties of the human brain, Kurzweil is wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which only in the 20th century did it become fashionable in some circles to disparage. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to be able to rationally conclude that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.

One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of being a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you. (How to Create a Mind, pp. 243-244)

This thought experiment is essentially the same as the one I independently posited in my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self”:

Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chooses to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.

Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)

Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, but with a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist. Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I differ. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of the nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result might be indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation. It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.

How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and the more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes of arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans maintain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is due to a deliberate push upward by the efforts of the minds of the creators of progress, using the machines they have built.

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II). Learn more about Mr. Stolyarov here.

An Analysis of Ethical Issues in the Film “Gattaca” (2004) – Article by G. Stolyarov II


G. Stolyarov II
July 4, 2014
******************************
Note from the Author: This essay was originally written in 2004 and published on Associated Content (subsequently, Yahoo! Voices) in 2007. It has earned over 40,000 page views since then, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 4, 2014
***

The central ethical dilemma of the 1997 Andrew Niccol film Gattaca concerns the manner in which an individual ought to be judged. Should it be by the composition of his genome, present at birth, or by the attributes of personality and ambition that are chosen by that individual? In the futuristic society depicted in the film, genetic engineering allows for the elimination of almost all physical defects in newborns, whose bodily characteristics later render them far more favorable candidates for employment than those whose genes had not been enhanced in this manner. Eventually, interviews are conducted not to assess an applicant’s character and determination, but his genetic code. The even more fundamental question that arises from this is, “What determines the essential identity of a human being? Is it his genetic code, or is it something else?”

Vincent is a child born in the obsolete manner, and thus his genome is riddled with “errors,” from which high “probabilities” of his developing certain ailments later in life are inferred. Nevertheless, these are probabilities only, and Vincent is healthy, athletic, and yearns to one day explore outer space. Unfortunately, he is denied admission to Gattaca, the facility of the space program, on the basis of his genome alone. Despite his splendid knowledge of astronomy and navigation, even the best test scores in the world will not gain him admission.

Yet Vincent is not content with the position of janitor, and “borrows” the identity of Jerome Morrow, a paralyzed individual with a superb genome. A series of complex procedures is designed to allow Vincent to pass all the substance tests and gain admission to Gattaca under the name of Jerome Morrow. Jerome may have the genetic endowment to enter Gattaca, but he lacks the will, and thus harbors no objection to Vincent taking his place. Another employee at Gattaca, Irene, had also been born in the obsolete manner, but her genome is adequate for her to be permitted to work on minor tasks. She suspects that Vincent may be connected with the recent murder of the mission director, who was about to uncover Vincent’s actual identity. In the course of the investigation, however, she enters a relationship with Vincent, and faces the dilemma of whether or not to disclose his identity to the police.

Vincent’s brother, Anton, is the inspector heading the murder investigation. Throughout his childhood, he sought to demonstrate his superiority to Vincent by virtue of his enhanced genetic endowment. Nevertheless, Vincent had once saved Anton’s life in a game of “chicken,” where Anton’s body had failed him, while Vincent’s was able to endure. Anton wishes to maintain the image of his superiority and is immensely jealous of Vincent’s success in aspiring to the heights of outer space.

Vincent attempts to deceive the security systems at Gattaca by pretending to be Jerome Morrow and presenting samples of bodily substances prepared by Jerome for various examinations. In the meantime, he studies and works diligently, and his level of performance at Gattaca is precisely what is anticipated of a man with a privileged genetic endowment. Thus, only a few people ever come to suspect that Vincent is a “borrowed ladder,” a fabricator of his genetic identity. Vincent is set to depart on a mission into space, after which his individual merits will overrule his genome conclusively, and he will no longer be subject to genetic security tests. However, the murder of the mission director subjects Gattaca to a series of extremely intrusive police investigations that threaten to uncover Vincent’s true identity and to implicate him in a murder of which he is innocent.

Vincent’s tenacity and resolve to enter space ultimately allow him to endure these turbulent times. Despite a multitude of close calls, he escapes detection, though he is recognized by Irene, whose personal admiration for Vincent overrides the fact that Vincent had broken the law. Anton also recognizes his brother and threatens to arrest him, still acting on his childhood jealousy. However, a final game of “chicken,” in which Vincent saves Anton once again, proves that Vincent’s defiance of the inferior expectations imposed upon him by his society has enabled him to surpass in ability individuals like Anton, whom societal expectations had favored. The doctor at Gattaca recognizes Vincent’s individual merits and decides to fabricate a “valid” test for him on the day of the launch. To people like the doctor, Vincent has proved his worth, and his genetic composition has become irrelevant.

Vincent’s course of action, though in violation of the law, was not in violation of moral principles. Vincent harmed no one by his attempt to pursue his ambitions at Gattaca and in outer space; thus, his action exhibited the principle of nonmaleficence. His exploratory endeavors were of immense benefit both to himself and to the level of knowledge available to the general society; thus, his action fulfilled the principle of beneficence. His action was an exercise of his individual autonomy and right to self-determination in the face of a hierarchical culture that repressed these rights. Finally, his action sought to secure for Vincent the just treatment that he deserved on the basis of his merits, and which, absent the action, would have been denied to him on the basis of his genome. Thus, the action fulfilled the principle of justice.

A rational society would have resolved the ethical dilemma of the proper criterion for judging an individual by eschewing determinism altogether. Vincent should not have initially been seen solely as the product of his genes, for a man is born tabula rasa where the mind is concerned. The human genome determines only the structural mechanisms that exist in the individual organism. How the individual employs those mechanisms is a matter of pure willpower and determination. Few genes can conclusively determine an individual’s fate; a high probability of heart disease can be reduced by strenuous exercise, of the sort Vincent engaged in. A low “intelligence quotient” is no obstacle to an individual reading, comprehending, and applying immense volumes of material, so long as the interest to do so is present.

Vincent should have been admitted to Gattaca on the basis of a one-on-one interview process that tested his knowledge, physical skill, and enthusiasm for space exploration, for, without these, the finest genetic endowment can still produce a Jerome Morrow, a man who is paralyzed not only in body (by an accident) but in mind (by lack of ambition). The theory that fits this solution is principlism. Vincent is not harming anyone by pursuing his own favorite field of exploration; thus, the action is nonmaleficent. He is amply benefiting himself and others through his skilled endeavors in the realm of space exploration; thus, the action is beneficent. He is allowed to exercise his individual autonomy and pursue his goals, regardless of societal prejudices. And, finally, he is entitled to the same freedom of action and opportunity that other members of his society (the genetically engineered individuals) possess, which passes the test for comparative justice.

Individualism, Objective Reality, and Open-Ended Knowledge – Video by G. Stolyarov II


Mr. Stolyarov explains how an objective reality governed by physical laws is compatible with individual self-determination and indeed is required for individuals to meaningfully expand their lives and develop their unique identities.

Reference

– “Individualism, Objective Reality, and Open-Ended Knowledge” – Post by G. Stolyarov II

SENS or Cryonics?: My Answer to a Hypothetical Choice – Video by G. Stolyarov II


If Mr. Stolyarov had $1 billion to donate to life extension, would he donate it to SENS (Strategies for Engineered Negligible Senescence) or cryonics? Find out his answer.

References:
– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Essay by G. Stolyarov II
– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Video by G. Stolyarov II
– SENS Research Foundation Website
– “Kim Suozzi Cryogenically Preserved After Battle With Brain Cancer” – Huffington Post – January 22, 2013

Individualism, Objective Reality, and Open-Ended Knowledge – Article by G. Stolyarov II


G. Stolyarov II
September 11, 2013
******************************

I am an individualist, but not a relativist. While I have no dispute with individuals determining their own meaning and discovering their own significance (indeed, I embrace this), this self-determination needs to occur within an objective physical universe. This is not an optional condition for any of us. The very existence of the individual relies on absolute, immutable physical and biological laws that can be utilized to give shape to the individual’s desires, but that cannot be ignored or wished away. This is why we cannot simply choose to live indefinitely and have this outcome occur. We need to develop technologies that would use the laws of nature to bring indefinite longevity about.

In other words, I am an ontological absolutist who sees wisdom in Francis Bacon’s famous statement that “Nature, to be commanded, must be obeyed.” Individual choice, discovery, and often the construction of personal identity and meaning are projects that I embrace, but they rely on fundamental objective prerequisites of matter, space, time, and causality. Individuals who wish to shape their lives for the better would be wise to take these prerequisites into account (e.g., by developing technologies that overcome the limitations of unaided biology or un-transformed matter). My view is that a transhumanist ethics necessitates an objective metaphysics and a reason-and-evidence-driven epistemology.

This does not, however, preclude an open-endedness to human knowledge and scope of generalization about existence. Even though an absolute reality exists and truth can be objectively known, we humans are still so limited and ignorant that we scarcely know a small fraction of what there is to know. Moreover, each of us has a grasp of different aspects of truth, and therefore there is room for valid differences of perspective, as long as they do not explicitly contradict one another. In other words, it is not possible for both A and non-A to be true, but if there is a disagreement between a person who asserts A and a person who asserts B, it is possible for both A and B to be true, as long as A and B are logically reconcilable. A dogmatic paradigm would tend to erroneously classify too much of the realm of ideas as non-A, if A is true, and hence would falsely reject some valid insights.

These insights illustrate the compatibility of objective physical and biological laws (physicalism) with individual self-determination (volition or free will). There is a similar relationship between ontology and ethics. An objective ontology (based on immutable natural laws) is needed as a foundation for an individualistic ethics of open-ended self-improvement and ceaseless progress.

Free Will and Self-Causation – Article by Leonid Fainberg


Leonid Fainberg
August 26, 2013
******************************

Homo liber nulla de re minus quam de morte cogitat; et ejus sapientia non mortis sed vitae meditatio est.

~ SPINOZA’S Ethics, Pt. IV, Prop. 67

(There is nothing over which a free man ponders less than death; his wisdom is, to meditate not on death but on life.)

Reductionism and its corollary, Determinism, are deeply rooted in the fabric of modern mainstream philosophy. These are leftovers of the Cartesian mind-body dichotomy. Instead of rejecting this notion altogether, Reductionists simply choose the other, bodily side of this loaded coin. Now they have reached a blind alley in their attempts to explain life in terms of lifelessness. As Hans Jonas observed:

“Vitalistic monism is replaced by mechanistic monism, in whose rules of evidence the standard of life is exchanged for that of death.” (The Phenomenon of Life, pg. 11).

Since Mind and Free Will are biological phenomena which cannot be explained in terms of non-life, Reductionists are necessarily Determinists. Hard Determinists reject the notion of Free Will (and therefore Mind) completely; soft Determinists and Compatibilists are still trying to find an explanation of Free Will in the indeterminate realm of Quantum mechanics, in the stochastic rules of Chaos theory, or in the mystical realm of Tao. I maintain that Free Will is a manifestation on the conceptual level of the very essential property of life itself, which is biological self-causation.

“Freedom must denote an objectively discernible mode of being, i.e., a manner of executing existence, distinctive of the organic per se.” (Ibid, pg. 3).

The Law of Causality is the Law of Identity applied to action (Ayn Rand). Since biological action is a self-generated, goal-orientated response (SIGOR) to environmental challenges, such an action cannot be predetermined by any antecedent cause. On the contrary, any action imposed by an antecedent or proximate cause could only be detrimental to the healthy living process.

As Rosen put it:

“[I]t is perfectly respectable to talk about a category of final causation and to [refer] to a component as the effect of its final cause… In this sense, then, a component is entailed by its function… a material system is an organism if and only if it is closed to efficient causation.” (Life Itself, pg. 135).

In other words, the process of biological causation is a process in which a final cause (a goal) becomes its efficient cause. Traditionally, the notion of the final cause was associated with Aristotle’s prime mover – some divine, supernatural source. However, this is not a case of mysticism; far from it.

Life emerged as a result of self-organization of abiotic elements. How that happened we don’t know yet. However, some researchers think that this is a thermodynamically inevitable event.

“Life is universally understood to require a source of free energy and mechanisms with which to harness it. Remarkably, the converse may also be true: the continuous generation of sources of free energy by abiotic processes may have forced life into existence as a means to alleviate the buildup of free energy stresses….” (Energy Flow and the Organization of Life. Harold Morowitz and Eric Smith, 2006).

But does this mean that life is a determined process? I don’t think so. Life is an emergent phenomenon, and as such it possesses new properties which its precursors don’t have. In their book Self-Organization in Biological Systems, Camazine et al. (2001: 8) define self-organization “as a process in which pattern at the global level of a system emerges solely from numerous interactions among the lower-level components of the system. The system has properties that are emergent, if they are not intrinsically found within any of the parts, and exist only at a higher level of description.”

From this definition it follows that (1) a process of self-organization doesn’t have an antecedent cause; and (2) the emergent properties of such a system are different from the properties of its components and therefore cannot be explained by means of reductionism. In other words, the properties of such a system are not defined by an antecedent cause. Life is a self-organizing, self-regulating material structure which is able to produce self-generated, goal-orientated action whose goal is the preservation and betterment of itself. This new emergent identity, applied to biotic action, defines a new type of causation: self-causation.
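To make the definition quoted above concrete, here is a minimal toy sketch in Python – my own illustration, not Fainberg’s, and no claim is made that it captures biological self-causation. An elementary cellular automaton (Rule 110) updates each cell from its immediate neighbors alone, yet a global pattern emerges that is not intrinsically found within any single cell:

```python
# Toy model of emergence (illustrative only): each cell is updated from
# purely local interactions -- its own state and its two neighbors --
# yet complex global structure appears across the generations.
RULE = 110  # the local update rule, encoded as 8 output bits

def step(cells):
    """Compute the next generation from local neighborhoods only."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

if __name__ == "__main__":
    width, generations = 64, 24
    cells = [0] * width
    cells[width // 2] = 1  # a single "on" cell to start
    for _ in range(generations):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
```

No description of a single cell’s rule predicts the global pattern that unfolds; the pattern exists, as Camazine et al. put it, “only at a higher level of description.”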

Harry Binswanger observed that “All levels of living action, from a cell’s protein-synthesis to a scientist’s investigations, are goal-directed. In vegetative action, past instances of the ‘final cause’ act as ‘efficient cause.’” (1992).

This is the mechanism of self-causation. Now it is clear why any action imposed on the organism and driven by an antecedent cause could only be detrimental: it would inevitably interfere with the self-generated action of the organism. Each and every organism is its own primary mover. In lower organisms, the degree of freedom of action is limited by their genetic setup. However, even lower organisms – fungi, for example – are able to overcome this genetic determinism.

“During a critical period, variability is generated by the fact that a system becomes conditioned by all the factors influencing the spontaneous emergence of a symmetry-breaking event. In such a context variability does not reflect an environmental perturbation in expression of a pre-existing (genetic) program of development…It is expression of a process of individuation.” (Trewavas, 1999)

SIGOR is limited by an organism’s perceptual ability and capacity to process sensory input. The process of evolution is a process of development of these qualities, since the organism’s survival depends on them. More freedom of action means better chances of survival. The end product of such a process is Free Will and self-awareness – that is, the human mind. Free Will, therefore, is an expression of self-causation on the conceptual level.

As Rodriguez observed: “Cerebral representations result from self-emergence of networks of interactions between modules of neurons stimulated by sensorial perception.” (Rodriguez et al., 1999)

The human abilities to choose goals consciously and to act rationally in order to achieve them lead us from biology to ethics. But the origin of these abilities lies in the very fundamental property of any living being. This property is self-generated, goal-orientated action driven by self-causation. Any attempt to reduce this property to a set of biochemical reactions or to the undetermined behavior of subatomic particles is doomed to fail. Ayn Rand profoundly summarized the meaning of life in We, The Living: “I know what I want, and to know HOW TO WANT – isn’t it life itself?”

Leonid Fainberg is an Objectivist philosopher and contributor to The Rational Argumentator.

We Seek Not to Become Machines, But to Keep Up with Them – Article by Franco Cortese


Franco Cortese
July 14, 2013
******************************
This article attempts to clarify four areas within the movement of Substrate-Independent Minds and the discipline of Whole-Brain Emulation that are particularly ripe for ready-at-hand misnomers and misconceptions.
***

Substrate-Independence 101:

  • Substrate-Independence:
    It is Substrate-Independence for Mind in general, but not any specific mind in particular.
  • The Term “Uploading” Misconstrues More than it Clarifies:
    Once WBE is experimentally-verified, we won’t be using conventional or general-purpose computers like our desktop PCs to emulate real, specific persons.
  • The Computability of the Mind:
    This concept has nothing to do with the brain operating like a computer. The liver is just as computable as the brain; their difference is one of computational intensity, not category.
  • We Don’t Want to Become The Machines – We Want to Keep Up With Them!:
    SIM & WBE are sciences of life-extension first and foremost. It is not out of sheer technophilia, “contempt of the flesh”, or wanton want of machinedom that proponents of Uploading support it. It is, for many, because we fear that Recursively Self-Modifying AI will implement an intelligence explosion before Humanity has a chance to come along for the ride. The creation of any one entity superintelligent relative to the rest constitutes both an existential risk and an antithetical affront to Man, whose sole central and incessant essence is to make himself to an increasingly greater degree, and not to have some artificial god do it for him or tell him how to do it.

Substrate-Independence

The term “substrate-independence” denotes the philosophical thesis of functionalism – that what is important about the mind and its constitutive sub-systems and processes is their relative function. If such a function could be recreated using an alternate series of component parts or procedural steps, or could be recreated on another substrate entirely, the philosophical thesis of Functionalism holds that it should be the same as the original, experientially speaking.

However, one rather common and ready-at-hand misinterpretation stemming from the term “Substrate-Independence” is the notion that we as personal selves could arbitrarily jump from mental substrate to mental substrate, since mind is software and software can be run on various general-purpose machines. The most common form of this notion is exemplified by scenarios laid out in various Greg Egan novels and stories, wherein a given person sends their mind encoded as a wireless signal to some distant receiver, to be reinstantiated upon arrival.

The term “substrate-independent minds” should denote substrate independence for the minds in general, again, the philosophical thesis of functionalism, and not this second, illegitimate notion. In order to send oneself as such a signal, one would have to put all the processes constituting the mind “on pause” – that is, all causal interaction and thus causal continuity between the software components and processes instantiating our selves would be halted while the software was encoded as a signal, transmitted and subsequently decoded. We could expect this to be equivalent to temporary brain death or to destructive uploading without any sort of gradual replacement, integration, or transfer procedure. Each of these scenarios incurs the ceasing of all causal interaction and causal continuity among the constitutive components and processes instantiating the mind. Yes, we would be instantiated upon reaching our destination, but we can expect this to be as phenomenally discontinuous as brain death or destructive uploading.

There is much talk in philosophical and futurist circles – where Substrate-Independent Minds are a familiar topic and a common point of discussion – about how the mind is software. This sentiment ultimately derives from functionalism and the notion that when it comes to mind, it is not the material of the brain that matters, but the process(es) emerging therefrom. And because almost all software is designed so as to be implemented on general-purpose (i.e., standardized) hardware, it is assumed that we should likewise be able to transfer the software of the mind onto a new physical computational substrate with as much ease as we transfer ordinary software. While we would emerge from such a transfer functionally isomorphic with ourselves prior to the jump from computer to computer, we can expect this to be the phenomenal equivalent of brain death or destructive uploading – again, because all causal interaction and continuity between that software’s constitutive sub-processes has been discontinued. We would have been put on pause in the time between leaving one computer, whether as a static signal or as static solid-state storage, and arriving at the other.

This is not to say that we couldn’t transfer the physical substrate implementing the “software” of our mind to another body, provided the other body were equipped to receive such a physical substrate. But this doesn’t have quite the same advantage as beaming oneself to the other side of Earth, or Andromeda for that matter, at the speed of light.

But to transfer a given WBE to another mental substrate without incurring phenomenal discontinuity may very well involve a second gradual integration procedure, in addition to the one the WBE initially underwent (assuming it isn’t a product of destructive uploading). And indeed, this would be more properly thought of in the context of a new substrate being gradually integrated with the WBE’s existing substrate, rather than the other way around (i.e., portions of the WBE’s substrate being gradually integrated with an external substrate). It is likely to be much easier to simply transfer a given physical/mental substrate to another body, or to bypass this need altogether by actuating bodies via tele-operation instead.

In summary, what is sought is substrate-independence for mind in general, and not for a specific mind in particular (at least not without a gradual integration procedure, like the type underlying the notion of gradual uploading, so as to transfer such a mind to a new substrate without causing phenomenal discontinuity).

The Term “Uploading” Misconstrues More Than It Clarifies

The term “Mind Uploading” has some drawbacks and creates common initial misconceptions. It is based on terminology originating from the context of conventional, contemporary computers – which may lead to the initial impression that we are talking about uploading a given mind into a desktop PC, to be run in the manner that Microsoft Word is run. This makes the notion of WBE seem more fantastic and incredible – and thus improbable – than it actually is. I don’t think anyone seriously speculating about WBE would entertain such a notion.

Another potential misinterpretation particularly likely to result from the term “Mind Uploading” is that we seek to upload a mind into a computer – as though it were nothing more than a simple file transfer. This, again, connotes modern paradigms of computation and communications technology that are unlikely to be used for WBE. It also creates the connotation of putting the mind into a computer – whereas a more accurate connotation, at least as far as gradual uploading as opposed to destructive uploading is concerned, would be bringing the computer gradually into the biological mind.

It is easy to see why the term initially came into use. The notion of destructive uploading was the first embodiment of the concept. The notion of gradual uploading so as to mitigate the philosophical problems pertaining to how much a copy can be considered the same person as the original, especially in contexts where they are both simultaneously existent, came afterward. In the context of destructive uploading, it makes more connotative sense to think of concepts like uploading and file transfer.

But in the notion of gradual uploading, portions of the biological brain – most commonly single neurons, as in Robert A. Freitas’s and Ray Kurzweil’s versions of gradual uploading – are replaced with in-vivo computational substrate, to be placed where the neuron it is replacing was located. Such a computational substrate would be operatively connected to electrical or electrochemical sensors (to translate the biochemical or, more generally, biophysical output of adjacent neurons into computational input that can be used by the computational emulation) and electrical or electrochemical actuators (to likewise translate computational output of the emulation into biophysical input that can be used by adjacent biological neurons). It is possible to have this computational emulation reside in a physical substrate existing outside of the biological brain, connected to in-vivo biophysical sensors and actuators via wireless communication (i.e., communicating via electromagnetic signal), but this simply introduces a potential lag-time that may then have to be overcome by faster sensors, faster actuators, or a faster emulation. The lag-time would likely be negligible, especially if the emulation were located in a convenient module external to the body but “on it” at all times, so as to minimize the transmission delays that increase as one gets farther away from such an external computational device. This would also likely necessitate additional computation to model the necessary changes to transmission speed in response to how far away the person is. Otherwise, signals that are meant to arrive at a given time could arrive too soon or too late, thereby disrupting functionality. However, placing the computational substrate in vivo obviates these potential logistical obstacles.
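The following deliberately simplified sketch – my own toy model, with hypothetical names throughout, abstracting away the sensor/actuator interface just described – illustrates the logic of gradual replacement: each unit is swapped for a functionally isomorphic emulated counterpart that continues to causally interact with the not-yet-replaced units, so the network’s behavior is unchanged at every stage.

```python
# Toy sketch of gradual replacement (an illustrative assumption, not a
# real WBE protocol): bio and emulated units implement the same
# input-output function, and replaced units keep interacting with the
# remaining bio units rather than running as a separate copy.
import math

def bio_neuron(inputs):
    """Stand-in for a biological neuron's input-output behavior."""
    return math.tanh(sum(inputs))

def emulated_neuron(inputs):
    """Functional replacement: same behavior on a different 'substrate'.
    In the scenario above, sensors/actuators would translate between
    biophysical and computational signals; here that interface is implicit."""
    return math.tanh(sum(inputs))

def run_network(kinds, stimulus, steps=5):
    """Run a small recurrent ring network of bio and/or emulated units."""
    state = list(stimulus)
    for _ in range(steps):
        state = [
            (bio_neuron if kind == "bio" else emulated_neuron)(
                [state[i - 1], state[i]]  # each unit hears a neighbor and itself
            )
            for i, kind in enumerate(kinds)
        ]
    return state

if __name__ == "__main__":
    stimulus = [0.1, 0.5, -0.3, 0.7]
    kinds = ["bio"] * 4
    baseline = run_network(kinds, stimulus)
    for i in range(4):  # replace one unit per stage
        kinds[i] = "emulated"
        assert run_network(kinds, stimulus) == baseline
    print("Behavior unchanged through gradual replacement:", baseline)
```

The point of the sketch is only the bookkeeping: at no stage is the whole network paused or duplicated, which is precisely what distinguishes gradual uploading from the scan-and-instantiate scenarios discussed earlier.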

This notion is, I think, not brought into the discussion enough. It is an intuitively obvious notion if you’ve thought a great deal about Substrate-Independent Minds and frequented discussions on Mind Uploading. But to a newcomer who has heard the term Gradual Uploading for the first time, it is all too easy to think “yes, but then one emulated neuron would exist on a computer, and the original biological neuron would still be in the brain. So once you’ve gradually emulated all these neurons, you have an emulation on a computer and the original biological brain, still as separate physical entities. Then you have an original and a copy – so where does the gradual in Gradual Uploading come in? How is this any different from destructive uploading? At the end of the day you still have a copy and an original as separate entities.”

This seeming impasse is, I think, enough to make the notion of Gradual Uploading seem at least intuitively or initially incredible and infeasible before people take the time to read the literature and discover how gradual uploading could actually be achieved (i.e., wherein each emulated neuron is connected to biophysical sensors and actuators to facilitate operational connection and causal interaction with existing in-vivo biological neurons) without fatally tripping upon such seeming logistical impasses as in the example above. The connotations created by the term, I think, to some extent make it seem so fantastic (as in the overly simplified misinterpretations considered above) that people write off the possibility before delving deeply enough into the literature and discussion to actually ascertain the possibility with any rigor.

The Computability of the Mind

Another common misconception is that the feasibility of Mind Uploading is based upon the notion that the brain is a computer or operates like a computer. The worst version of this misinterpretation that I’ve come across is the claim that proponents and supporters of Mind Uploading hold the mind to be similar in operation to current and conventional paradigms of computers.

Before I elaborate on why this is wrong, I’d like to point out a particularly harmful sentiment that can result from this notion. It makes the concept of Mind Uploading seem dehumanizing, because conventional computers don’t display anything like intelligence or emotion. This makes people conflate the possible behaviors of future computers with the behaviors of current computers. Obviously today’s computers don’t feel happiness or love, and so – on this mistaken view – to say that the brain is like a computer appears to be a farcical claim.

Machines don’t have to be as simple or as un-adaptable and invariant as they are today. The universe itself is a machine. In other words, either everything is a machine or nothing is.

This misunderstanding also makes people think that advocates and supporters of Mind Uploading are claiming that the mind is reducible to basic or simple autonomous operations, like cogs in a machine, which constitutes for many people a seeming affront to our privileged place in the universe as humans, in general, and to our culturally ingrained notions of human dignity being inextricably tied to physical irreducibility, in particular. The intuitive notions of human dignity and the ontologically privileged nature of humanity have yet to catch up with physicalism and scientific materialism (a.k.a. metaphysical naturalism). It is not the proponents of Mind Uploading that are raising these claims, but science itself – and for hundreds of years, I might add. Man’s privileged and physically irreducible ontological status has become more and more undermined throughout history since at least as far back as Darwin’s theory of evolution, which brought the notion of the past and future phenotypic evolution of humanity into scientific plausibility for the first time.

It is also seemingly disenfranchising to many people, in that notions of human free will and autonomy seem to be challenged by physical reductionism and determinism – perhaps because many people’s notions of free will are still associated with a non-physical, untouchably metaphysical human soul (i.e., mind-body dualism) which lies outside the purview of physical causality. To compare the brain to a “mindless machine” is still for many people disenfranchising to the extent that it questions the legitimacy of their metaphysically tied notions of free will.

That the sheer audacity of experience and the raucous beauty of feeling are ultimately reducible to physical and procedural operations (I hesitate to use the word “mechanisms” for its likewise misconnotative conceptual associations) does not take away from them. If they were the result of some untouchable metaphysical property – a sentiment that mind-body dualism promulgated for quite some time – then there would be no way for us to understand them, to really appreciate them, and to change them (e.g., improve upon them) in any way. Physicalism and scientific materialism are needed if we are ever to see how it is done and ever to hope to change it for the better. Figuring out how things work is one of Man’s highest merits – and there is no reason Man’s urge to discover and determine the underlying causes of the world should not apply to his own self as well.

Moreover, the fact that experience, feeling, being, and mind result from the convergence of singly simple systems and processes makes the mind’s emergence from such simple convergence all the more astounding, amazing, and rare, not less! If the complexity and unpredictability of mind were the result of complex and unpredictable underlying causes (like the metaphysical notions of mind-body dualism connote), then the fact that mind turned out to be complex and unpredictable wouldn’t be much of a surprise. The simplicity of the mind’s underlying mechanisms makes the mind’s emergence all the more amazing, and should not take away from our human dignity but should instead raise it up to heights yet unheralded.

Now that we have addressed such potentially harmful second-order misinterpretations, we will address their root: the common misinterpretations likely to result from the phrase “the computability of the mind”. Not only does this phrase not say that the mind is similar in basic operation to conventional paradigms of computation – as though a neuron were comparable to a logic gate or transistor – but neither does it necessarily make the more credible claim that the mind is like a computer in general. That misinterpretation makes the notion of Mind Uploading seem dubious, because it conflates two different types of physical systems – computers and the brain.

The kidney is just as computable as the brain. That is to say that the computability of mind denotes the ability to make predictively accurate computational models (i.e., simulations and emulations) of biological systems like the brain, and is not dependent on anything like a fundamental operational similarity between biological brains and digital computers. We can make computational models of a given physical system, feed it some typical inputs, and get a resulting output that approximately matches the real-world (i.e., physical) output of such a system.
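As a minimal sketch of what a predictively accurate computational model means here – my own illustration, assuming Newton’s law of cooling as the physical system – one can numerically integrate a system’s governing equation and compare the model’s output against the system’s actual behavior:

```python
# Illustrative assumption: the "physical system" obeys Newton's law of
# cooling, dT/dt = -K * (T - T_ENV). The computational model steps this
# equation forward in time and is checked against the known closed-form
# behavior of the system.
import math

K = 0.3        # cooling constant (1/min), chosen for illustration
T_ENV = 20.0   # ambient temperature (deg C)
T0 = 90.0      # initial temperature (deg C)

def model(minutes, dt=0.001):
    """Computational model: simple forward-Euler integration."""
    temp = T0
    for _ in range(int(minutes / dt)):
        temp += -K * (temp - T_ENV) * dt
    return temp

def system(minutes):
    """The physical system's actual behavior: T_ENV + (T0 - T_ENV)e^(-Kt)."""
    return T_ENV + (T0 - T_ENV) * math.exp(-K * minutes)

if __name__ == "__main__":
    for t in (1, 5, 10, 30):
        m, s = model(t), system(t)
        print(f"t={t:2d} min: model={m:7.3f}  system={s:7.3f}  error={abs(m - s):.4f}")
```

Nothing in this procedure requires the cooling object to “be a computer”; the same modeling strategy applies, at vastly greater computational intensity, to a liver, a kidney, or a brain.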

The computability of the mind has very little to do with the mind acting as or operating like a computer, and much, much more to do with the fact that we can build predictively accurate computational models of physical systems in general. This also, advantageously, negates and obviates many of the seemingly dehumanizing and indignifying connotations identified above that often result from the claim that the brain is like a machine or like a computer. It is not that the brain is like a computer – it is just that computers are capable of predictively modeling the physical systems of the universe itself.

We Want Not To Become Machines, But To Keep Up With Them!

Too often is uploading portrayed as the means to superhuman speed of thought or to transcending our humanity. It is not that we want to become less human, or to become like a machine. For most Transhumanists, and indeed most proponents of Mind Uploading and Substrate-Independent Minds, meat is machinery anyway. In other words, there is no real (i.e., legitimate) ontological distinction between human minds and machines to begin with. Too often is uploading seen as the desire for superhuman abilities. Too often is it seen as a bonus – nice, but ultimately unnecessary.

I vehemently disagree. Uploading has been, from the start, for me (and, I think, for many other proponents and supporters of Mind Uploading) a means of life extension – of deferring and ultimately defeating untimely, involuntary death – as opposed to an ultimately unnecessary route to greater powers, a more privileged position relative to the rest of humanity, or an escape from our humanity in a fit of contempt for the flesh. We do not want to turn ourselves into Artificial Intelligence – a somewhat perverse and burlesque caricature that is associated with Mind Uploading far too often.

The notion of gradual uploading is implicitly a means of life extension. Gradual uploading will be significantly harder to accomplish than destructive uploading. It requires a host of technologies and methodologies – brain-scanning, in-vivo locomotive systems (such as, but not limited to, nanotechnology, or else extremely robust biotechnology) – and a host of precautions to prevent causing phenomenal discontinuity, such as giving each non-biological functional replacement time to causally interact with adjacent biological components before the next biological component is likewise replaced. The only advantage gradual uploading has over destructive uploading is preserving the phenomenal continuity of a single, specific person. In this way it is implicitly a means of life extension rather than a means to the creation of AGI: its only benefit is the preservation and continuation of a single, specific human life, and that benefit entails a host of added precautions and additional necessitated technological and methodological infrastructures.

If we didn't have to fear the creation of recursively self-improving AI – biased toward recursively self-modifying at a rate faster than humans are likely to (or, indeed, able to safely – that is, gradually enough to prevent phenomenal discontinuity) – then I would favor biotechnological methods of achieving indefinite lifespans over gradual uploading. But with the way things are, I am an advocate of gradual Mind Uploading first and foremost because I think it may prove necessary to prevent humanity from being left behind by recursively self-modifying superintelligences. I hope that it ultimately will not prove necessary – but at the current time I feel that it is somewhat likely.

Most people who wish to implement or accelerate an intelligence explosion à la I. J. Good – and, more recently, Vernor Vinge and Ray Kurzweil – wish to do so because they feel that such a recursively self-modifying superintelligence (RSMSI) could essentially solve all of humanity's problems: disease, death, scarcity, existential insecurity. I think that the potential benefits of creating an RSMSI are outweighed by the drastic increase in existential risk entailed in making any one entity superintelligent relative to humanity. The old God of yore is finally going out of fashion, one and a quarter centuries late to his own eulogy. Let's please not make another one, now with a little reality under his belt this time around.

Intelligence is a far greater source of existential and global catastrophic risk than any technology that could be wielded by such an intelligence (except, of course, for technologies that would allow an intelligence to increase its own intelligence). Intelligence can invent new technologies and conceive of ways to counteract any defense systems we put in place to protect against the destructive potential of any given technology. A superintelligence is far more dangerous than rogue nanotech (i.e., grey goo) or bioweapons. When intelligence comes into play, all bets are off. I think human culture exemplifies this prominently enough. Moreover, for the first time in history, the technological solutions to these problems – death, disease, scarcity – are on the conceptual horizon. We can fix these problems ourselves, without creating an effective God relative to Man and incurring the extreme potential for complete human extinction that such a relative superintelligence would entail.

Thus uploading constitutes one of the means by which humanity can choose, volitionally, to stay on the leading edge of change, discovery, invention, and novelty, if the creation of an RSMSI is indeed imminent. It is not that we wish to become machines and eschew our humanity – rather, the loss of autonomy and freedom inherent in the creation of a relative superintelligence is antithetical to the defining features of humanity. In order to preserve the uniquely human thrust toward greater self-determination in the face of such an RSMSI, or at least be given the choice of doing so, we may require the ability to gradually upload so as to stay on equal footing in terms of speed of thought and general level of intelligence (which is roughly correlative with the capacity to effect change in the world, and thus to determine its determining circumstances and conditions as well).

In a perfect world we wouldn't need to take the chance of phenomenal discontinuity inherent in gradual uploading. In gradual uploading there is always a chance, no matter how small, that we will come out the other side of the procedure as a different (i.e., phenomenally distinct) person. We can seek to minimize that chance by extending the degree of graduality with which we replace the material constituents of the mind, and by minimizing the scale at which we replace those material constituents (i.e., gradual substrate replacement one ion channel at a time would be likelier to preserve phenomenal continuity than replacement neuron by neuron would be). But there is always a chance.

This is why biotechnological means of indefinite lifespans have an immediate advantage over uploading, and why if non-human RSMSI were not a worry, I would favor biotechnological methods of indefinite lifespans over Mind Uploading. But this isn’t the case; rogue RSMSI are a potential problem, and so the ability to secure our own autonomy in the face of a rising RSMSI may necessitate advocating Mind Uploading over biotechnological methods of indefinite lifespans.

Mind Uploading has some ancillary benefits over biotechnological means of indefinite lifespans as well. If functional equivalence is validated (i.e., if the basic approach is shown to work), mitigating existing sources of damage becomes categorically easier. In physical embodiment, repairing structural, connectional, or procedural sub-systems in the body requires (1) a means of determining the source of damage and (2) a host of technologies and corresponding methodologies to enter the body, make the physical changes that negate or otherwise obviate the structural, connectional, or procedural source of such damage, and then exit the body without damaging or causing dysfunction to other systems in the process. Both of these requirements become much easier in the virtual embodiment of whole-brain emulation.

First, looking toward requirement (2): we do not need to design any technologies and methodologies for entering and leaving the system without damage or dysfunction, or for physically implementing the changes that remediate the sources of damage. In virtual embodiment this requires nothing more than rewriting information. In the case of WBE we have the capacity to rewrite information as easily as it was written in the first place; while we would still need to know what changes to make (which is really the hard part in this case), actually implementing those changes is as easy as rewriting a word-processing file. There is no categorical difference, since it is all information, and we would already have a means of rewriting information.

Looking toward requirement (1) – actually elucidating the structural, connectional, or procedural sources of damage and/or dysfunction – we see that virtual embodiment makes this much easier as well. In physical embodiment we would need to make changes to the system itself in order to determine the source of the damage. In virtual embodiment we could run a section of the emulation for a given amount of time, change or eliminate a given informational variable (i.e., a structure, component, etc.), and see how this affects the emergent system-state of the emulation instance.

Iteratively doing this to different components, and to different sequences of components, in trial-and-error fashion should elucidate the structural, connectional, or procedural sources of damage and dysfunction. The fact that an emulation can be run faster than real time (thus accelerating this iterative change-and-check procedure), and that we can "rewind" or "play back" an instance of emulation time exactly as it occurred initially, means that noise (i.e., sources of error) from natural systemic state-changes would not affect the results of this procedure – whereas in physicality, systems and structures are always changing, which constitutes a source of experimental noise. The conditions of the experiment would be exactly the same in every iteration of this change-and-check procedure. Moreover, the ability to arbitrarily speed up and slow down the emulation will aid in detecting and locating the emergent changes caused by changing or eliminating a given microscale component, structure, or process.
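As a toy illustration of this change-and-check loop, the sketch below (Python; the `run_emulation` function and its dynamics are hypothetical stand-ins, not an actual brain emulation) perturbs one component at a time of a seeded, deterministic toy network and compares the resulting trajectory against an identically seeded baseline – the determinism is what stands in for "rewinding" and replaying the emulation exactly.

```python
import numpy as np

def run_emulation(weights, steps=50, seed=0):
    """Deterministically evolve a toy 'emulation': a small recurrent network.

    Fixing the seed plays the role of 'rewinding and replaying' an emulation
    instance: every run with the same seed and weights is bit-identical.
    """
    rng = np.random.default_rng(seed)
    state = rng.standard_normal(weights.shape[0])
    for _ in range(steps):
        state = np.tanh(weights @ state)   # toy stand-in for neural dynamics
    return state

# Baseline instance: a small random network, fixed by a seed.
rng = np.random.default_rng(42)
weights = rng.standard_normal((20, 20)) * 0.5
baseline = run_emulation(weights)

# Change-and-check: knock out one component at a time and measure the
# divergence of the emergent system-state from the baseline replay.
for component in range(weights.shape[0]):
    perturbed = weights.copy()
    perturbed[component, :] = 0.0          # eliminate one informational variable
    divergence = np.linalg.norm(run_emulation(perturbed) - baseline)
    print(f"component {component:2d}: divergence {divergence:.3f}")
```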

Thus the process of finding the sources of damage correlative with disease and aging (especially insofar as the brain is concerned) could be greatly improved through uploading. Moreover, WBE should accelerate the technological and methodological development of the computational emulation of biological systems in general, making it possible to use such procedures to detect the structural, connectional, and procedural sources of age-related damage and systemic dysfunction in the rest of the body as well, not just in the brain.

Note that this iterative change-and-check procedure would be just as possible via destructive uploading as via gradual uploading. Moreover, for people actually instantiated as whole-brain emulations, remediating those structural, connectional, and/or procedural sources of damage is much easier than it is for physically embodied humans. Incidentally, if being able to distinguish the homeostatic, regulatory, and metabolic structures and processes in the brain from its computational or signal-processing structures and processes is a requirement for uploading (which I don't think it necessarily is, although I do think that such a distinction would decrease the ultimate computational intensity, and thus the computational requirements, of uploading, thereby allowing it to be implemented sooner and to have wider availability), then this iterative change-and-check procedure could also be used to accelerate the elucidation of such a distinction, for the same reasons that it could accelerate the elucidation of the structural, connectional, and procedural sources of age-related systemic damage and dysfunction.

Lastly, while uploading (particularly in instances where a single entity or small group of entities is uploaded prior to the rest of humanity – i.e., not a maximally distributed intelligence explosion) itself constitutes a source of existential risk, it also constitutes a means of mitigating existential risk. Currently we stand on the surface of the earth, naked to whatever might lurk in the deep night of space. We have not been watching the sky for long enough to know with any certainty that some unforeseen cosmic process could not come along to wipe us out at any time. Uploading would allow at least a small portion of humanity to live virtually on a computational substrate located deep underground, away from the surface of the earth and its inherent dangers, thus preserving the future human heritage should an extinction event befall humanity. Uploading would also remove the danger of being physically killed by some accident of physicality, like being hit by a bus or struck by lightning.

Uploading is also the most resource-efficient means of life extension on the table, because virtual embodiment essentially negates the need for most physical resources, necessitating only one – energy – and increasing computational price-performance means that how much a given amount of energy can accomplish is continually increasing.

It also mitigates the most pressing ethical problem of indefinite lifespans: overpopulation. In virtual embodiment, overpopulation ceases to be an issue almost ipso facto. I agree with John Smart's STEM-compression hypothesis – that the advantages proffered by virtual embodiment will, in the long run at least, make choosing it over physical embodiment an obvious choice for most civilizations – and I think it will be the volitional choice for most future persons. It is the safer, more resource-efficient (and thus more ethical, if one thinks that forestalling future births in order to maintain existing life is unethical), and more advantageous choice. We will not need to say: migrate into virtuality if you want another physically embodied child. Most people will make the choice to go virtual themselves, simply due to the numerous advantages and the lack of any experiential incomparabilities (i.e., modalities of experience possible in physicality but not in VR).

So, in summary: yes, Mind Uploading (especially gradual uploading) is more a means of life extension than a means to arbitrarily greater speed of thought, intelligence, or power (i.e., capacity to effect change in the world). We do not seek to become machines – only to retain the capability of choosing to remain on equal footing with them, if the creation of RSMSI is indeed imminent. There is no other reason to increase our collective speed of thought, and to do so would be arbitrary – unless we expected to be unable to prevent the physical end of the universe, in which case doing so would increase the ultimate amount of time, and the number of lives, that could be instantiated in the time we have left.

The flaws in many of these misconceptions may be glaringly obvious, especially to those readers familiar with Mind Uploading as a notion and with Substrate-Independent Minds and/or Whole-Brain Emulation as disciplines. I may, to some extent, be preaching to the choir in these cases. But I find many of these misinterpretations far too predominant and recurrent to be left alone.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

How Can I Live Forever?: What Does and Does Not Preserve the Self – Video by G. Stolyarov II

How Can I Live Forever?: What Does and Does Not Preserve the Self – Video by G. Stolyarov II

When we seek indefinite life, what is it that we are fundamentally seeking to preserve? Mr. Stolyarov discusses what is necessary for the preservation of “I-ness” – an individual’s direct vantage point: the thoughts and sensations of a person as that person experiences them directly.

Once you are finished with this video, you can take a quiz and earn the “I-ness” Awareness Open Badge.

Reference

– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Essay by G. Stolyarov II

Immortality: Material or Ethereal? Nanotech Does Both! – Article by Franco Cortese

Immortality: Material or Ethereal? Nanotech Does Both! – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 11, 2013
******************************

This essay is the second chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first chapter was previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death“.

In August 2006 I conceived of the initial cybernetic brain-transplant procedure. It originated from a very simple, even intuitive sentiment: if there were heart and lung machines and prosthetic organs, then why couldn't these be combined with modern (and future) robotics to keep the brain alive past the death of its biological body? I saw a possibility, felt its magnitude, and threw myself into realizing it. I couldn't think of a nobler quest than the final eradication of involuntary death, and felt willing to spend the rest of my life trying to make it happen.

First I collected research on organic brain transplantation; on maintaining the brain's homeostatic and regulatory mechanisms outside the body (or, in this case, without the body); on a host of prosthetic and robotic technologies (including sensory prosthesis and substitution); and on the work in Brain-Computer Interface technologies that would eventually allow a given brain to control its new, non-biological body – essentially collecting the disparate mechanisms and technologies that would collectively converge to facilitate the creation of a fully cybernetic body, one able to house the organic brain and keep it alive past the death of its homeostatic and regulatory organs.

I had by this point come across online literature on Artificial Neurons (ANs) and Artificial Neural Networks (ANNs), which are basically simplified mathematical models of neurons meant to process information in a way coarsely comparable to their biological counterparts. There was no mention in the literature of integrating them with existing neurons, or of replacing existing neurons, toward the objective of immortality; their use was merely as an interesting approach to computation, particularly suited to certain situations. While artificial neurons can be run on general-purpose hardware (massively parallel architectures being the most efficient for ANNs, however), I had something more akin to neuromorphic hardware in mind (though I wasn't aware of that term just yet).

At its most fundamental level, an Artificial Neuron need not even be physical at all. Its basic definition is a mathematical model roughly based on neuronal operation – and there is nothing precluding that model from existing solely on paper, with no actual computation going on. When I discovered them, I had thought that a given artificial neuron was a physically embodied entity rather than a software simulation – i.e., an electronic device that operates in a way comparable to biological neurons. Upon learning that they were mathematical models, however, and that each AN needn't be a separate entity from the rest of the ANs in a given AN network, I saw no problem in designing them so as to be separate physical entities (which they needed to be in order to fit the purposes I had for them – namely, the gradual replacement of biological neurons with prosthetic functional equivalents). Each AN would be a software entity run on a piece of computational substrate, enclosed in a protective casing allowing it to co-exist with the biological neurons already in place. The mathematical or informational outputs of the simulated neuron would be translated into biophysical, chemical, and electrical outputs by operatively connecting the simulation to an appropriate series of actuators (which could range from something as simple as producing electric fields or currents, to releasing chemical stores of neurotransmitters), and likewise to a series of sensors that translate biophysical, chemical, and electrical properties into the mathematical or informational form they would need to take to be accepted as input by the simulated AN.
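A minimal sketch of that sensor–model–actuator pipeline might look like the following (Python; the class and method names, the toy normalization, and the weighted-sum-and-threshold model are all illustrative assumptions of mine, not any actual design from the project):

```python
import numpy as np

class ProstheticNeuronUnit:
    """Illustrative sensor -> simulated-AN -> actuator pipeline.

    The 'simulation' here is a deliberately simple weighted-sum model;
    a real replacement unit would run a far richer neuron model.
    """

    def __init__(self, synaptic_weights, threshold=1.0):
        self.weights = np.asarray(synaptic_weights, dtype=float)
        self.threshold = threshold

    def sense(self, measured_potentials_mv):
        # Sensors: translate measured biophysical quantities (here, local
        # membrane potentials in millivolts) into the model's input form.
        return (np.asarray(measured_potentials_mv) + 70.0) / 15.0  # toy normalization

    def simulate(self, inputs):
        # The mathematical model itself: weighted summation and thresholding.
        activation = float(self.weights @ inputs)
        return activation >= self.threshold

    def actuate(self, fired):
        # Actuators: translate the informational output back into a
        # biophysical effect on adjacent biological neurons.
        amount = 0.2 if fired else 0.0  # toy magnitude
        return {"release_neurotransmitter_nmol": amount}

    def step(self, measured_potentials_mv):
        return self.actuate(self.simulate(self.sense(measured_potentials_mv)))

unit = ProstheticNeuronUnit(synaptic_weights=[0.6, 0.3, 0.8])
print(unit.step([-55.0, -62.0, -50.0]))  # inputs from three adjacent neurons
```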

Thus at this point I didn’t make a fundamental distinction between replicating the functions and operations of a neuron via physical embodiment (e.g., via physically embodied electrical, chemical, and/or electromechanical systems) or via virtual embodiment (usefully considered as 2nd-order embodiment, e.g., via a mathematical or computational model, whether simulation or emulation, run on a 1st-order physically embodied computational substrate).

My consideration of the potential advantages, disadvantages, and categorical differences between these two approaches was still a few months away. When I discovered ANs, still thinking of them as physically embodied electronic devices rather than as mathematical or computational models, I hadn't yet moved on to ways of preserving the organic brain itself so as to delay its organic death. Their utility in constituting a more permanent, durable, and readily repairable supplement for our biological neurons wasn't yet apparent.

I initially saw their utility as being intelligence amplification, extension, and modification through their integration with the existing biological brain. I realized that they were categorically different from Brain-Computer Interfaces (BCIs) and normative neural prostheses in being able to become an integral and continuous part of our minds and personalities – or, more properly, of the subjective, experiential parts of our minds. If they communicated with single neurons and interacted with them on their own terms – if the two were operationally indistinct – then they could become a continuous part of us in a way that didn't seem possible for normative BCI, due to BCI's fundamental operational dissimilarity with existing biological neural networks.

I also collected research on the artificial synthesis and regeneration of biological neurons as an alternative to ANs. This approach would replace an aging or dying neuron with an artificially synthesized but still structurally and operationally biological neuron, so as to maintain the aging or dying neuron's existing connections and relative location. I saw this procedure (i.e., adding artificial, or artificially synthesized but still biological, neurons to the existing neurons constituting our brains – not yet for the purpose of gradually replacing the brain, but instead for the purpose of mental expansion and amplification) as not only allowing us to extend our existing functional and experiential modalities (e.g., making us smarter through an increase in synaptic density and connectivity, and an increase in the number of neurons in general), but even to create fundamentally new functional and experiential modalities, categorically unimaginable to us now, via the integration of wholly new Artificial Neural Networks embodying such new modalities. Note that I saw this as newly possible with my cybernetic-body approach because additional space could be made for the additional neurons and neural networks, whereas the degree to which we could integrate new, artificial neural networks into a normal biological body would be limited by the available volume of the unmodified skull.

Before I discovered ANs, I speculated in my notes as to whether the "bionic nerves" alluded to in some of the literature I had collected by this point (specifically regarding BCI, neural prosthesis, and the ability to operatively connect a robotic prosthetic extremity – e.g., an arm or a leg – via BCI) could be used to extend the total number of neurons and synaptic connections in the biological brain. This sprang from my knowledge of the operational similarities between neurons and muscle cells, both members of the larger class of excitable cells.

Kurzweil's cyborgification approach (i.e., that we could integrate non-biological systems with our biological brains to such an extent that the biological portions become so small as to be negligible to our subjective-continuity when they succumb to cell death, thus achieving effective immortality without needing to actually replace any of our existing biological neurons at all) may have been implicit in this concept. I envisioned our brains increasing in size many times over, such that the majority of the mind would be embodied or instantiated more by the artificial portions than by the biological ones. The corollary – that the degree to which the loss of a part of our brain affects our emergent personalities depends on how big that lost part is in comparison to the total size of the brain (other potential metrics, alternative to size, include connectivity and the degree to which other systems depend on that portion for their own normative operation), the loss of a lobe being much worse than the loss of a neuron – follows naturally from this initial premise. The lack of any explicit statement of this realization in my notes during this period, however, makes this mere speculation.

It wasn't until November 11, 2006, that I had the fundamental insight underlying mind-uploading – that the replacement of existing biological neurons with non-biological functional equivalents maintaining the existing relative locations and connections of those biological neurons could very well facilitate maintaining the memory and personality embodied therein or instantiated thereby – essentially achieving potential technological immortality, since the approach is based on replacement, and iterations of replacement-cycles can be run indefinitely. Moreover, the fact that we would be manufacturing such functional equivalents ourselves means that we could not only diagnose potential eventual dysfunctions more easily and with greater speed, but could manufacture them so as to have readily replaceable parts, thus simplifying the process of physically remediating any such potential dysfunction or operational degradation – even going so far as to include systems for the safe import and export of replacement components, or to make all such components readily detachable, so that we wouldn't have to cause damage to adjacent structures and systems in the process of removing a given component.

Perhaps it wasn't so large a conceptual step from knowledge of the existence of computational models of neurons to the realization of using them to replace existing biological neurons toward the aim of immortality. Perhaps I take too much credit for independently conceiving both the underlying conceptual gestalt of mind-uploading and some specific technologies and methodologies for its pragmatic technological implementation. Nonetheless, it was a realization I arrived at on my own, and one that I felt would allow us to escape the biological death of the brain itself.

While I was aware (after a little more research) that ANNs were mathematical (and thus computational) models of neurons – an approach hereafter referred to as the informationalist-functionalist approach – I felt that a physically embodied (i.e., not computationally emulated or simulated) prosthetic approach – hereafter the physicalist-functionalist approach – would be the better one to take. This was because even if the brain were completely reducible to computation, a prosthetic approach would still necessarily facilitate the computation underlying the functioning of the neuron (as the physical operations of biological neurons do presently); and if the brain proved to be computationally irreducible, then the prosthetic approach would presumably preserve whatever salient physical processes were necessary. So the prosthetic approach didn't necessitate the computational-reducibility premise – but neither did it preclude such a view, thereby allowing me to hedge my bets and increase the cumulative likelihood of maintaining subjective-continuity of consciousness through substrate replacement in general.

This marks a telling proclivity recurrent throughout my project: the development of mutually exclusive and methodologically and/or technologically alternate systems for a given objective, each based upon alternate premises and contingencies – a sort of possibilizational web unfurling fore and outward. After all, if one approach failed, then we had alternate approaches to try. This seemed like the work-ethic and conceptualizational methodology that would best ensure the eventual success of the project.

I also had less assurance in the sufficiency of the informationalist-functionalist approach at the time, stemming mainly from a misconception about the premises of normative Whole-Brain Emulation (WBE). When I first discovered ANs, I was dubious about the computational reducibility of the mind because I thought that it relied on the premise that neurons act in a computational fashion (i.e., like normative computational paradigms) to begin with – thus a conflation of classical computation with neural operation – rather than on the conclusion, drawn from the Church-Turing thesis, that mind is computable because the universe is. It is not that the brain is a computer to begin with, but that we can model any physical process via mathematical/computational emulation and simulation. The latter is the correct view, and I didn't really realize this until after I had discovered the WBE roadmap in 2010. This fundamental misconception allowed me, however, to also independently arrive at the insight underlying the real premise of WBE: that combining premise A – that we have various mathematical and computational models of neuron behavior – with premise B – that we can run mathematical models on computers – ultimately yields conclusion C – that we can simply run the relevant mathematical models on computational substrate, thereby effectively instantiating the mind "embodied" in those neural operations, while simultaneously eliminating many logistical and technological challenges of the prosthetic approach. This seemed likelier than the original assumption – conflating neuronal activity with normative computation, as though neurons were a special case not applicable to, say, muscle cells or skin cells, which isn't the presumption WBE makes at all – because this approach only required the ability to mathematically model anything, rather than relying on a fundamental equivalence between two different types of physical system (neuron and classical computer). The fact that I mistakenly saw my approach to emulation as categorically dissimilar to normative WBE also helped urge me on to continue conceptual development of the various sub-aims of the project after I had found that the idea of brain emulation already existed, because I thought that my approach was sufficiently different to warrant my continued effort.

There are other reasons for suspecting that mind may not be computationally reducible using current computational paradigms – reasons that rely neither on vitalism (i.e., the claim that mind is at least partially immaterial and irreducible to physical processes) nor on the invalidity of the Church-Turing thesis. This line of reasoning has nothing to do with functionality and everything to do with possible physical bases for subjective-continuity, both (a) immediate subjective-continuity (i.e., how can we be a unified, continuous subjectivity if all our component parts are discrete and separate in space?), which can be considered the capacity to have subjective experience, also called sentience (as opposed to sapience, which designates the higher cognitive capacities, like abstract thinking), and (b) temporal subjective-continuity (i.e., how do we survive as continuous subjectivities through a process of gradual substrate replacement?). Thus this argument impacts the possibility of computationally reproducing mind only insofar as the definition of mind is not strictly functional but is made to include a subjective sense of self – or immediate subjective-continuity. Note that subjective-continuity through gradual replacement is not speculative (only the scale and rate required to sufficiently implement it are), but rather has proof of concept in the normal metabolic replacement of the neuron's constituent molecules. Each of us is materially a different person than we were seven years ago, and we still claim to retain subjective-continuity. Thus gradual replacement works; it is just the scale and rate required that are under question.

This is another way in which my approach and project differ from WBE. WBE equates functional equivalence (i.e., the same output via different processes) with subjective equivalence, whereas my approach involved developing variant approaches to neuron-replication-unit (NRU) design, each based on a different hypothetical basis for instantive subjective-continuity.

Are Current Computational Paradigms Sufficient?

Biological neurons are both analog and binary. It is useful to consider a 1st tier of analog processes – manifest in the graded potentials spreading across the neuronal soma and dendrites – with a 2nd tier of binary processing: either the sum of these graded potentials crosses the threshold value needed for the neuron to fire an action potential, or it falls short and the neuron fails to fire. Thus the analog processes form the basis of the digital ones. Moreover, the neuron is in an analog state even in the absence of membrane depolarization, through the generation of the resting membrane potential (maintained via active ion-transport proteins), which is analog rather than binary because it always undergoes minor fluctuations, being instantiated by an active process (ion pumps). Thus the neuron at any given time is always in the process of a state-transition (including minor state-transitions still within the variation-range allowed by a given higher-level static state; e.g., the resting membrane potential is a single state, yet still undergoes minor fluctuations because the ions and components manifesting it undergo state-transitions without the resting membrane potential itself undergoing one), and thus is never definitively on or off. This brings us to the first potential physical basis for both immediate and temporal subjective-continuity. Analog states are continuous, and the fact that there is never a definitive break in the processes occurring at the lower levels of the neuron represents a potential basis for our subjective sense of immediate and temporal continuity.
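A toy sketch of these two tiers follows (Python; the parameter values and noise model are illustrative assumptions): the membrane variable below changes continuously – even with zero input, because of the noisy "active" resting process – while the output is binary, produced only when the analog sum crosses threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_tier_neuron(graded_inputs, v_rest=-70.0, threshold=-55.0,
                    leak=0.1, pump_noise=0.3):
    """1st tier: continuous analog integration; 2nd tier: binary firing."""
    v = v_rest
    analog_trace, binary_trace = [], []
    for g in graded_inputs:
        # Analog tier: graded input, leak toward rest, and small fluctuations
        # from the active resting process (ion pumps) - never a dead "off" state.
        v += g - leak * (v - v_rest) + rng.normal(0.0, pump_noise)
        analog_trace.append(v)
        # Binary tier: all-or-none output on threshold crossing.
        fired = v >= threshold
        binary_trace.append(int(fired))
        if fired:
            v = v_rest  # reset after the action potential
    return analog_trace, binary_trace

# Even with zero input, the analog state keeps fluctuating; with input, it fires.
quiet, _ = two_tier_neuron(np.zeros(100))
_, spikes = two_tier_neuron(np.full(100, 2.0))
print(f"resting fluctuation range: {min(quiet):.2f} to {max(quiet):.2f} mV")
print(f"spikes with steady input: {sum(spikes)}")
```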

Paradigms of digital computation, on the other hand, are at the lowest scale either definitively on or definitively off. While any voltage within a certain range will cause the generation of an output, it is still at base binary because in the absence of input the logic elements are not producing any sort of fluctuating voltage—they are definitively off. In binary computation, the substrates undergo a break (i.e., region of discontinuity) in their processing in the absence of inputs, and are in this way fundamentally dissimilar to the low-level operational modality of biological neurons by virtue of being procedurally discrete rather than procedurally continuous.

If the premise holds true that the analog and procedurally continuous nature of neuron functioning (including action potentials, the resting membrane potential, and metabolic processes) forms a potential basis for immediate and temporal subjective-continuity, then current digital paradigms of computation may prove insufficient for maintaining subjective-continuity if used as the substrate in a gradual-replacement procedure, while still being sufficient to functionally replicate the mind in all empirically verifiable metrics and measures. This is due both to the operational modality of binary processing (i.e., lack of analog output) and to its procedural modality (the lack of temporal continuity – of minor fluctuations around a baseline state – when in a resting or inoperative state). Note, however, that a logic element could have a fluctuating resting voltage rather than the absence of any voltage, and could thus be procedurally continuous while still being operationally discrete, by producing solely binary outputs.

So there are two possibilities here. One is that any physical substrate used to replicate a neuron (whether via 1st-order embodiment, a.k.a. prosthesis/physical systems, or via 2nd-order embodiment, a.k.a. computational emulation or simulation) must not undergo a break in its operation in the absence of input – because biological neurons do not, and this may be a potential basis for instantive subjective-continuity – but rather must produce a continuous or uninterrupted signal when in a "steady state" (i.e., in the absence of inputs). The second possibility includes all the premises of the first, but adds that such an inoperative-state signal (or "no-inputs"-state signal) must undergo minor fluctuations, because only then is a steady stream of causal interaction occurring – producing a perfectly steady signal could be as discontinuous as producing no signal at all, like "being on pause".
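As a minimal illustration of the distinction just drawn (Python; this is entirely hypothetical hardware, modeled abstractly – nothing here corresponds to a real device), the element below never "goes dead": its internal voltage keeps fluctuating around a baseline even with no inputs, satisfying the second possibility, while its outputs remain strictly binary.

```python
import random

class ProcedurallyContinuousGate:
    """An operationally discrete (binary-output) logic element whose internal
    state is procedurally continuous: at rest it holds a fluctuating baseline
    voltage instead of switching definitively off."""

    def __init__(self, baseline_v=0.4, flutter=0.05, logic_threshold=0.7):
        self.baseline_v = baseline_v
        self.flutter = flutter
        self.logic_threshold = logic_threshold
        self.v = baseline_v

    def tick(self, input_v=None):
        if input_v is None:
            # No input: do not go silent; fluctuate around the baseline,
            # maintaining an uninterrupted stream of causal interaction.
            self.v = self.baseline_v + random.uniform(-self.flutter, self.flutter)
            return None  # no logical output, but the signal never stops
        self.v = input_v
        return 1 if self.v >= self.logic_threshold else 0  # binary output

gate = ProcedurallyContinuousGate()
resting_voltages = []
for _ in range(5):
    gate.tick()                      # quiescent tick: no logical output...
    resting_voltages.append(gate.v)  # ...but a live, fluctuating voltage
print([f"{v:.3f}" for v in resting_voltages])
print(gate.tick(0.9), gate.tick(0.2))  # driven ticks produce binary outputs
```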

Thus one reason for developing the physicalist-functionalist (i.e., physically embodied prosthetic) approach to NRU design was a hedging of bets, in case (a) current computational substrates fail to replicate a personally continuous mind for the reasons described above; or (b) we fail to discover the principles underlying a given physical process – thus being unable to predictively model it – but still succeed in integrating that process with the artificial systems comprising the prosthetic approach, until such a time as we are able to discover its underlying principles; or (c) we find some other, heretofore unanticipated conceptual obstacle to the computational reducibility of mind.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Copeland, J. B. (2008). The Church-Turing Thesis. In The Stanford Encyclopedia of Philosophy (Fall 2008 Edition). Retrieved February 28, 2013, from http://plato.stanford.edu/archives/fall2008/entries/church-turing

Crick, F. (1984). Memory and molecular turnover. Nature, 312(5990), 101. PMID: 6504122.

Criterion of falsifiability. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/201091/criterion-of-falsifiability

Drexler, K. E. (1986). Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Books.

Grabianowski (2007). How Brain-computer Interfaces Work. Retrieved February 28, 2013, from http://computer.howstuffworks.com/brain-computer-interface.htm

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013, from http://www.minduploading.org/

Martins, N. R., Erlhagen, W., & Freitas Jr., R. A. (2012). Non-destructive whole-brain monitoring using nanorobots: Neural electrical data rate requirements. International Journal of Machine Consciousness. Retrieved February 28, 2013, from http://www.nanomedicine.com/Papers/NanoroboticBrainMonitoring2012.pdf

Narayan, A. (2004). Computational Methods for NEMS. Retrieved February 28, 2013, from http://nanohub.org/resources/407

Sandberg, A., & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap (Technical Report #2008-3). Future of Humanity Institute, Oxford University.

Star, E. N., Kwiatkowski, D. J., & Murthy, V. N. (2002). Rapid turnover of actin in dendritic spines and its regulation by activity. Nature Neuroscience, 5, 239-246.

Tsien, J. Z., Rampon, C., Tang, Y. P., & Shimizu, E. (2000). NMDA receptor-dependent synaptic reinforcement as a crucial process for memory consolidation. Science, 290, 1170-1174.

Vladimir, Z. (2013). Neural Network. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/410549/neural-network

Wolf, W. (2009, March). Cyber-physical Systems. In Embedded Computing. Retrieved February 28, 2013, from http://www.jiafuwan.net/download/cyber_physical_systems.pdf