
Review of Ray Kurzweil’s “How to Create a Mind” – Article by G. Stolyarov II

G. Stolyarov II


How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading to the not-too-distant future (the late 2020s, according to Kurzweil) when computers will be able to emulate human minds.

Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in its details, it can arise on the basis of carrying out simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.
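
The flavor of this hierarchical pattern recognition can be conveyed with a toy sketch. The Python fragment below is my own illustration, not Kurzweil’s algorithm (his book centers on hierarchical hidden Markov models with learned patterns); all class names and patterns are invented for the example. It shows how recognizers that each apply a trivially simple rule can be stacked so that a higher level recognizes a structure no single low-level rule describes.

```python
# Toy hierarchy of pattern recognizers (illustrative only).
# Each recognizer applies one simple rule; higher levels treat
# lower-level recognizers as their "alphabet".

class Recognizer:
    """Fires when all of its child patterns are matched in order."""
    def __init__(self, name, children):
        self.name = name
        self.children = children  # strings (primitives) or Recognizers

    def matches(self, tokens):
        """Return number of tokens consumed on a match, else None."""
        pos = 0
        for child in self.children:
            if isinstance(child, Recognizer):
                consumed = child.matches(tokens[pos:])
                if consumed is None:
                    return None
                pos += consumed
            else:
                if pos >= len(tokens) or tokens[pos] != child:
                    return None
                pos += 1
        return pos

# Low-level recognizers fire on letter primitives...
c = Recognizer("letter C", ["c"])
a = Recognizer("letter A", ["a"])
t = Recognizer("letter T", ["t"])
# ...and a higher-level recognizer fires on a pattern of lower-level ones.
cat = Recognizer("word 'cat'", [c, a, t])

print(cat.matches(list("cat")))  # 3 (recognized)
print(cat.matches(list("cot")))  # None (not recognized)
```

Kurzweil’s claim, in effect, is that with enough such layers – and with recognizers that learn their own patterns rather than having them hand-coded – the same principle scales from letters to words to abstract concepts.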

With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of today.

The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (that showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. Especially if the task is to engineer a mind that has at least the faculties of the human brain, then Kurzweil is wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which it became fashionable in some circles to disparage predominantly in the 20th century. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to be able to rationally conclude that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.

One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of becoming a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you.  (How to Create a Mind, pp. 243-244)

This thought experiment is essentially the same one as I independently posited in my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self”:

Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chooses to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.

Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)

Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, but with a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist. Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I differ. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result may be potentially indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation. It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.

How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes of arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans maintain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is due to a deliberate push upward by the efforts of the minds of the creators of progress, using the machines they have built.

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II). Learn more about Mr. Stolyarov here

Ayn Rand’s Heroic Life – Article by Jeffrey A. Tucker

The New Renaissance Hat
Jeffrey A. Tucker
******************************

I first encountered Ayn Rand through her nonfiction. This was when I was a junior in high school, and I’m pretty sure it was my first big encounter with big ideas. It changed me. Like millions of others who read her, I developed a consciousness that what I thought – the ideas I held in my mind – mattered for what kind of life I would live. And it mattered for everyone else too; the kind of world we live in is an extension of what we believe about what life can mean.

People today argue over her legacy and influence – taking apart the finer points of her ethics, metaphysics, epistemology. This is all fine but it can be a distraction from her larger message about the moral integrity and creative capacity of the individual human mind. In so many ways, it was this vision that gave the postwar freedom movement what it needed most: a driving moral passion to win. This, more than any technical achievements in economic theory or didactic rightness over public-policy solutions, is what gave the movement the will to overcome the odds.

Often I hear people offer a caveat about Rand. Her works are good. Her life, not so good. Probably this impression comes from public curiosity about various personal foibles and issues that became the subject of gossip, as well as the extreme factionalism that afflicted the movement she inspired.

This is far too narrow a view. In fact, she lived a remarkably heroic life. Had she acquiesced to the life fate seemed to have chosen for her, she would have died young, poor, and forgotten. Instead, she had the determination to live free. She left Russia, immigrated to the United States, made her way to Hollywood, and worked and worked until she built a real career. This one woman – with no advantages and plenty of disadvantages – on her own became one of the most influential minds of the twentieth century.

So, yes, her life deserves to be known and celebrated. Few of us today face anything like the barriers she faced. She overcame them and achieved greatness. Let her inspire you too.

Kudos to the Atlas Society for this video:

Pope Francis vs. the Cure of Reason – Article by Edward Hudgins

The New Renaissance Hat
Edward Hudgins
September 27, 2015
******************************

A young girl was recently interviewed on TV about her encounter with Pope Francis on his visit to the United States. She cried with joy as she described how he touched her on the forehead and offered a blessing. Now, she said, she might get the miracle she’s prayed for. Maybe someday she’ll be able to walk.

Who could not be moved by a crippled child who wants to be cured? But what is really wrenching is the fact that this child and so many others look to faith rather than science and reason.

Medical breakthroughs

On the same day the Pope was touching the little girl, a news story was circulating about a breakthrough in prosthetics. A brain implant has restored a sense of touch to a man with a robotic hand.

Another story in recent months documented technology that allows individuals to control their artificial limbs with their thoughts.

Some even express fears that bionic legs in the future could be so good that they will be preferred to the natural ones we’re born with.

The sightless have sought divine intercession to cure blindness since before the time of Jesus. A few days before the Pope toured D.C., a breakthrough was announced that involves applying a light-sensitive protein found in algae to the back of the retinas of eyes to, in effect, replace the rods and cones destroyed by certain diseases. The technique has been successful in mice, and human tests are now coming.


This restorative treatment has welcome competition. Last month saw a man receive the first bionic eye implant.

And let’s not forget that deafness is in the process of being vanquished thanks to cochlear implants.

Free markets needed

Free markets, of course, if allowed to operate, will make what are now pricey, experimental medical technologies affordable for most, just as markets have allowed entrepreneurs to create and bring down the prices of computers, smartphones, tablets, Wifi, and all the hardware and software of the information revolution.

Handicapped individuals, like the girl who was so happy the Pope touched her, might have bright futures indeed. But they need to recognize that it is not faith that will make them whole. It is reason.

Human reason needed

It is the power of the human mind, especially in science and engineering, that has brought about the benefits of our modern world. Yet where are the parades, the speeches before Congress, and the celebrations that recognize the sources of such benefits and encourage reason and achievement as foundational values in our culture? Why do so many seek hope in faith and otherworldly miracles when real achievements—“miracles” of the human mind—are all around us? Why do so few understand that training minds and encouraging entrepreneurship is the best way to ensure a healthy, prosperous future? With all the enthusiasm we see for the Pope, where is the enthusiasm for the actual creators and achievers in our world?

Ironically, the Pope, in his economic ignorance, denounces the free market system that could cure that little girl. And he promotes draconian economic restrictions to fight hypothesized global warming, restrictions that would ensure that the poor he says he cares so much about will be with us always. The Pope—and all of us—indeed should empathize with that little girl. But he should be touting reason as the cure. This Jesuit Pope needs to read his Thomas Aquinas!

Those who are enthusiastic about the Pope’s visit because he inspires hope for a better world had better look to the real source of all our blessings: the human mind.

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright, The Atlas Society. For more information, please visit www.atlassociety.org.

The Human Rational Faculty and the Necessity of Property Rights (2005) – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 20, 2014
******************************
Note from the Author: This essay was originally written in 2005 and published on Associated Content (subsequently, Yahoo! Voices) in 2007.  I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.  
***
~ G. Stolyarov II, July 20, 2014
***
Each individual is, by his fundamental and inextricable identity, a rational being, with a means of accurately identifying and analyzing reality with his mind. The individual’s rational faculty is his sole gateway to knowledge, and the sole means by which he can direct the application of his knowledge to the external world.
***

Nobody else’s activity of any sort can substitute for the individual’s own thinking, just as nobody else’s activity can substitute for an individual’s own digestion. Each individual is also fundamentally a volitional being, and can choose to default on the responsibility of thinking for himself, thereby also choosing to bear the consequences.

However, whatever he chooses, it remains irrefutably true that he still possesses the capacity to be rational. From this capacity it is implied that he ought to be allowed to be rational, i.e., that he has a natural right to use his reason and benefit from the applications thereof.

Nobody should be permitted to interfere with another individual’s use of reason, nor to substitute his reasoning for another’s and force another to agree with or accept the consequences of his reasoning unless the other explicitly consents.

When two individuals come to an agreement, each has used his own reasoning to embrace it. When, however, such a clear, unambiguous agreement is not present, the individual who presumes to place his thoughts in the stead of another’s is committing the initiation of force, which is the opposite of reason.

Since all natural rights are derived from the human capacity to reason, all violations of natural rights are derived from the initiation of force by some individuals against others.

The only manner in which reason can have any concrete, material expression is by means of property, i.e., those material entities which belong to an individual as a consequence of his use of reason. Even the very capacity to reason itself is dependent on property, as the individual mind is a material entity, and, were it not for the concrete biological mechanisms of the brain, there would not be abstract thought.

Thus, to be able to reason, the individual must have a property in his physical mind. In order for his physical mind to function, an individual must also have property in his physical body, since, not only is the mind part of the body but, without the proper functioning of the remainder of the body, the mind would not be able to survive. In summation, the right to the use of one’s reason implies the right to property in oneself and, as a corollary, the right to use one’s reason to determine what shall happen to one’s mind and body.

Free Will and Self-Causation – Article by Leonid Fainberg

The New Renaissance Hat
Leonid Fainberg
August 26, 2013
******************************

Homo liber nulla de re minus quam de morte cogitat; et ejus sapientia non mortis sed vitae meditatio est.

~ SPINOZA’S Ethics, Pt. IV, Prop. 67

(There is nothing over which a free man ponders less than death; his wisdom is, to meditate not on death but on life.)

Reductionism and its corollary, Determinism, are deeply rooted in the fabric of modern mainstream philosophy. These are leftovers of the Cartesian mind-body dichotomy. Instead of rejecting this notion altogether, Reductionists simply choose the other, bodily side of this loaded coin. Now they have reached a blind alley in their attempts to explain life in terms of lifelessness. As Hans Jonas observed:

“Vitalistic monism is replaced by mechanistic monism, in whose rules of evidence the standard of life is exchanged for that of death.” (The Phenomenon of Life, pg. 11).

Since Mind and Free Will are biological phenomena which cannot be explained in terms of non-life, Reductionists are necessarily Determinists. Hard Determinists reject the notion of Free Will (and therefore Mind) completely; soft Determinists and Compatibilists are still trying to find an explanation of Free Will in the indeterminate realm of Quantum mechanics, in stochastic rules of Chaos theory, or in the mystical realm of Tao. I maintain that Free Will is a manifestation on the conceptual level of the very essential property of life itself, which is biological self-causation.

“Freedom must denote an objectively discernible mode of being, i.e., a manner of executing existence, distinctive of the organic per se.” (Ibid, pg. 3).

The Law of Causality is the Law of Identity applied to action (Ayn Rand). Since biological action is a self-generated, goal-orientated response (SIGOR) to environmental challenges, such an action cannot be predetermined by any antecedent cause. On the contrary, any action imposed by an antecedent or proximate cause could only be detrimental to the healthy living process.

As Rosen put it:

“[I]t is perfectly respectable to talk about a category of final causation and to a component as the effect of its final cause… In this sense, then, a component is entailed by its function… a material system is an organism if and only if it is closed to efficient causation.” (Life Itself, pg. 135).

In other words, the process of biological causation is a process in which a final cause (a goal) becomes its efficient cause. Traditionally, the notion of the final cause has been associated with Aristotle’s primary mover, some divine, supernatural source. However, this is not a case of mysticism – far from it.

Life emerged as a result of self-organization of abiotic elements. How that happened we don’t know yet. However, some researchers think that this is a thermodynamically inevitable event.

“Life is universally understood to require a source of free energy and mechanisms with which to harness it. Remarkably, the converse may also be true: the continuous generation of sources of free energy by abiotic processes may have forced life into existence as a means to alleviate the buildup of free energy stresses….” (Energy Flow and the Organization of Life. Harold Morowitz and Eric Smith, 2006).

But does this mean that life is a determined process? I don’t think so. Life is an emergent phenomenon, and as such it possesses new properties which its precursors don’t have. In their book Biological Self-organization Camazine et al. (2001: 8) define self-organization “as a process in which pattern at the global level of a system emerges solely from numerous interactions among the lower-level components of the system. The system has properties that are emergent, if they are not intrinsically found within any of the parts, and exist only at a higher level of description….”

From this definition it follows that (1) a process of self-organization doesn’t have an antecedent cause; and (2) emergent properties of such a system are different from the properties of its components and therefore cannot be explained by means of reductionism. In other words, properties of such a system are not defined by antecedent cause. Life is a self-organizing, self-regulated material structure which is able to produce self-generated, goal-orientated action when the goal is the preservation and betterment of itself. This new emergent identity, applied to biotic action, defines a new type of causation: self-causation.
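
A standard toy illustration of this definition – my own, not one drawn from Camazine et al. or the other sources quoted here – is Conway’s Game of Life. Every rule in the Python sketch below is purely local to a cell and its eight neighbors, yet coherent global patterns emerge, such as the “glider” that travels across the grid even though no individual cell moves at all.

```python
# Conway's Game of Life: emergence of global pattern from local rules.
from collections import Counter

def step(live):
    """Advance one generation; `live` is the set of live (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step if it has 3 neighbors,
    # or 2 neighbors and is already live. That is the entire rule set.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same five-cell glider, shifted one cell diagonally
```

The "glider" is a property of the pattern, not of any cell: no lower-level component contains it, which is precisely the sense of "emergent" used in the definition above.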

Harry Binswanger observed that “All levels of living action, from a cell’s protein-synthesis to a scientist’s investigations, are goal-directed. In vegetative action, past instances of the ‘final cause’ act as ‘efficient cause.’” (1992).

This is the mechanism of self-causation. Now it is clear why any action imposed on the organism and driven by antecedent cause could only be detrimental: it inevitably would interfere with the self-generated action of the organism. Each and every organism is its own primary mover. In lower organisms the degree of freedom of action is limited by their genetic setup. However, even low organisms, like fungi for example, are able to overcome this genetic determinism.

“During a critical period, variability is generated by the fact that a system becomes conditioned by all the factors influencing the spontaneous emergence of a symmetry-breaking event. In such a context variability does not reflect an environmental perturbation in expression of a pre-existing (genetic) program of development…It is expression of a process of individuation.” (Trewavas, 1999)

SIGOR is limited by an organism’s perceptual ability and capacity to process the sensory input. The process of evolution is a process of development of these qualities, since the organism’s survival depends on them. More freedom of action means better chances of survival. The end product of such a process is Free Will and self-awareness – that is, the human mind. Free Will therefore is an expression of self-causation on the conceptual level.

As Rodriguez observed: “Cerebral representations result from self-emergence of networks of interactions between modules of neurons stimulated by sensorial perception.” (Rodriguez et al., 1999)

The human abilities to choose goals consciously and to act rationally in order to achieve them lead us from biology to ethics. But the origin of these abilities lies in the very fundamental property of any living being. This property is self-generated, goal-orientated action driven by self-causation. Any attempt to reduce this property to a set of biochemical reactions or to the undetermined behavior of subatomic particles is doomed to fail. Ayn Rand profoundly summarized the meaning of life in We the Living: “I know what I want, and to know HOW TO WANT – isn’t it life itself?”

Leonid Fainberg is an Objectivist philosopher and contributor to The Rational Argumentator.

We Seek Not to Become Machines, But to Keep Up with Them – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
July 14, 2013
******************************
This article attempts to clarify four areas within the movement of Substrate-Independent Minds and the discipline of Whole-Brain Emulation that are particularly ripe for ready-at-hand misnomers and misconceptions.
***

Substrate-Independence 101:

  • Substrate-Independence:
    It is Substrate-Independence for Mind in general, but not any specific mind in particular.
  • The Term “Uploading” Misconstrues More than it Clarifies:
    Once WBE is experimentally-verified, we won’t be using conventional or general-purpose computers like our desktop PCs to emulate real, specific persons.
  • The Computability of the Mind:
    This concept has nothing to do with the brain operating like a computer. The liver is just as computable as the brain; their difference is one of computational intensity, not category.
  • We Don’t Want to Become The Machines – We Want to Keep Up With Them!:
    SIM & WBE are sciences of life-extension first and foremost. It is not out of sheer technophilia, contemptuous “contempt of the flesh”, or wanton want of machinedom that proponents of Uploading support it. It is, for many, because we fear that Recursively Self-Modifying AI will implement an intelligence explosion before Humanity has a chance to come along for the ride. The creation of any one entity superintelligent to the rest constitutes both an existential risk and an antithetical affront to Man, whose sole central and incessant essence is to make himself to an increasingly greater degree, and not to have some artificial god do it for him or tell him how to do it.
Substrate-Independence
***

The term “substrate-independence” denotes the philosophical thesis of functionalism – that what is important about the mind and its constitutive sub-systems and processes is their relative function. If such a function can be recreated using an alternate series of component parts or procedural steps, or on another substrate entirely, the philosophical thesis of Functionalism holds that it should be the same as the original, experientially speaking.
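
A loose software analogy (an analogy only, and mine rather than the article’s) may help: functionalism’s core intuition resembles programming to an interface, where two implementations built from entirely different internal “parts” are interchangeable to any caller that depends only on their input/output function. The class names below are invented for the example.

```python
# Two structurally different implementations of the same function are
# indistinguishable at the functional level - a rough analogy for the
# functionalist thesis, not an argument for it.
from abc import ABC, abstractmethod

class Adder(ABC):
    @abstractmethod
    def add(self, a: int, b: int) -> int: ...

class ArithmeticAdder(Adder):
    def add(self, a, b):
        return a + b              # "substrate" 1: native arithmetic

class CounterAdder(Adder):
    def add(self, a, b):
        total = a
        for _ in range(b):        # "substrate" 2: repeated increment
            total += 1
        return total

# A caller that depends only on the function cannot tell them apart.
for adder in (ArithmeticAdder(), CounterAdder()):
    assert adder.add(20, 22) == 42
```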

However, one rather common and ready-at-hand misinterpretation stemming from the term “Substrate-Independence” is the notion that we as personal selves could arbitrarily jump from mental substrate to mental substrate, since mind is software and software can be run on various general-purpose machines. The most common form of this notion is exemplified by scenarios laid out in various Greg Egan novels and stories, wherein a given person sends their mind encoded as a wireless signal to some distant receiver, to be reinstantiated upon arrival.

The term “substrate-independent minds” should denote substrate independence for minds in general – again, the philosophical thesis of functionalism – and not this second, illegitimate notion. In order to send oneself as such a signal, one would have to put all the processes constituting the mind “on pause” – that is, all causal interaction and thus causal continuity between the software components and processes instantiating our selves would be halted while the software was encoded as a signal, transmitted and subsequently decoded. We could expect this to be equivalent to temporary brain death or to destructive uploading without any sort of gradual replacement, integration, or transfer procedure. Each of these scenarios incurs the ceasing of all causal interaction and causal continuity among the constitutive components and processes instantiating the mind. Yes, we would be instantiated upon reaching our destination, but we can expect this to be as phenomenally discontinuous as brain death or destructive uploading.

There is much talk in the philosophical and futurist circles – where Substrate-Independent Minds are a familiar topic and a common point of discussion – on how the mind is software. This sentiment ultimately derives from functionalism, and the notion that when it comes to mind it is not the material of the brain that matters, but the process(es) emerging therefrom. And because almost all software is designed so as to be implemented on general-purpose (i.e., standardized) hardware, the sentiment holds that we should likewise be able to transfer the software of the mind into a new physical computational substrate with as much ease as we transfer ordinary software. While we would emerge from such a transfer functionally isomorphic with ourselves prior to the jump from computer to computer, we can expect this to be the phenomenal equivalent of brain death or destructive uploading, again, because all causal interaction and continuity between that software’s constitutive sub-processes has been discontinued. We would have been put on pause in the time between leaving one computer, whether as static signal or static solid-state storage, and arriving at the other.

This is not to say that we couldn’t transfer the physical substrate implementing the “software” of our mind to another body, provided the other body were equipped to receive such a physical substrate. But this doesn’t have quite the same advantage as beaming oneself to the other side of Earth, or Andromeda for that matter, at the speed of light.

But to transfer a given WBE to another mental substrate without incurring phenomenal discontinuity may very well involve a second gradual integration procedure, in addition to the one the WBE initially underwent (assuming it isn’t a product of destructive uploading). And indeed, this would be more properly thought of in the context of a new substrate being gradually integrated with the WBE’s existing substrate, rather than the other way around (i.e., portions of the WBE’s substrate being gradually integrated with an external substrate.) It is likely to be much easier to simply transfer a given physical/mental substrate to another body, or to bypass this need altogether by actuating bodies via tele-operation instead.

In summary, what is sought is substrate-independence for mind in general, and not for a specific mind in particular (at least not without a gradual integration procedure, like the type underlying the notion of gradual uploading, so as to transfer such a mind to a new substrate without causing phenomenal discontinuity).

The Term “Uploading” Misconstrues More Than It Clarifies

The term “Mind Uploading” has some drawbacks and creates common initial misconceptions. It is based on terminology originating from the context of conventional, contemporary computers – which may lead to the initial impression that we are talking about uploading a given mind into a desktop PC, to be run in the manner that Microsoft Word is run. This makes the notion of WBE more fantastic and incredible – and thus improbable – than it actually is. I don’t think anyone seriously speculating about WBE would entertain such a notion.

Another potential misinterpretation particularly likely to result from the term “Mind Uploading” is that we seek to upload a mind into a computer – as though it were nothing more than a simple file transfer. This, again, connotes modern paradigms of computation and communications technology that are unlikely to be used for WBE. It also creates the connotation of putting the mind into a computer – whereas a more accurate connotation, at least as far as gradual uploading as opposed to destructive uploading is concerned, would be bringing the computer gradually into the biological mind.

It is easy to see why the term initially came into use. The notion of destructive uploading was the first embodiment of the concept. The notion of gradual uploading so as to mitigate the philosophical problems pertaining to how much a copy can be considered the same person as the original, especially in contexts where they are both simultaneously existent, came afterward. In the context of destructive uploading, it makes more connotative sense to think of concepts like uploading and file transfer.

But in the notion of gradual uploading, portions of the biological brain – most commonly single neurons, as in Robert A. Freitas’s and Ray Kurzweil’s versions of gradual uploading – are replaced with in-vivo computational substrate, to be placed where the neuron it is replacing was located. Such a computational substrate would be operatively connected to electrical or electrochemical sensors (to translate the biochemical or, more generally, biophysical output of adjacent neurons into computational input that can be used by the computational emulation) and electrical or electrochemical actuators (to likewise translate computational output of the emulation into biophysical input that can be used by adjacent biological neurons). It is possible to have this computational emulation reside in a physical substrate existing outside of the biological brain, connected to in-vivo biophysical sensors and actuators via wireless communication (i.e., communicating via electromagnetic signal), but this simply introduces a potential lag-time that may then have to be overcome by faster sensors, faster actuators, or a faster emulation. It is likely that the lag-time would be negligible, especially if it was located in a convenient module external to the body but “on it” at all times, to minimize transmission delays increasing as one gets farther away from such an external computational device. This would also likely necessitate additional computation to model the necessary changes to transmission speed in response to how far away the person is.  Otherwise, signals that are meant to arrive at a given time could arrive too soon or too late, thereby disrupting functionality. However, placing the computational substrate in vivo obviates these potential logistical obstacles.
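
The bookkeeping of this scheme can be made concrete with a deliberately schematic sketch. The Python below is not a model of real neurophysiology or of Freitas’s or Kurzweil’s specific proposals; every class, threshold, and count is invented for illustration. It captures only the structural point: units are swapped one at a time, in place, while the surrounding network keeps operating, and functional continuity can be checked at every step.

```python
# Schematic sketch of gradual replacement (illustrative only; all
# classes and numbers are invented - this models bookkeeping, not biology).
import random

class BiologicalNeuron:
    def fire(self, inputs):
        return sum(inputs) > 0.5      # stand-in for biophysical output

class EmulatedNeuron:
    """Functional replacement, notionally wired in via sensors/actuators."""
    def fire(self, inputs):
        return sum(inputs) > 0.5      # same input/output function

network = [BiologicalNeuron() for _ in range(10)]

# Replace one unit per "step"; the rest of the network keeps interacting
# with each replacement before the next swap occurs.
for i in range(len(network)):
    inputs = [random.random() for _ in range(3)]
    outputs_before = [n.fire(inputs) for n in network]
    network[i] = EmulatedNeuron()     # in-place swap, one unit at a time
    outputs_after = [n.fire(inputs) for n in network]
    assert outputs_before == outputs_after  # function preserved at each step

print("all units replaced; network function preserved throughout")
```

The contrast with destructive uploading is visible in the structure itself: at no point does a complete second network exist alongside the first; there is only ever the one network, whose constitution changes piecewise.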

This notion is I think not brought into the discussion enough. It is an intuitively obvious notion if you’ve thought a great deal about Substrate-Independent Minds and frequented discussions on Mind Uploading. But to a newcomer who has heard the term Gradual Uploading for the first time, it is all too easy to think “yes, but then one emulated neuron would exist on a computer, and the original biological neuron would still be in the brain. So once you’ve gradually emulated all these neurons, you have an emulation on a computer, and the original biological brain, still as separate physical entities. Then you have an original and the copy – so where does the gradual in Gradual Uploading come in? How is this any different than destructive uploading? At the end of the day you still have a copy and an original as separate entities.”

This seeming impasse is I think enough to make the notion of Gradual Uploading seem at least intuitively or initially incredible and infeasible before people take the time to read the literature and discover how gradual uploading could actually be achieved (i.e., wherein each emulated neuron is connected to biophysical sensors and actuators to facilitate operational connection and causal interaction with existing in-vivo biological neurons) without fatally tripping upon such seeming logistical impasses, as in the example above. The connotations created by the term I think to some extent make it seem so fantastic (as in the overly simplified misinterpretations considered above) that people write off the possibility before delving deep enough into the literature and discussion to actually ascertain the possibility with any rigor.

The Computability of the Mind

Another common misconception is that the feasibility of Mind Uploading is based upon the notion that the brain is a computer or operates like a computer. The worst version of this misinterpretation that I’ve come across is that proponents and supporters of Mind Uploading are claiming that the mind is similar in operation to current and conventional paradigms of computers.

Before I elaborate why this is wrong, I’d like to point out a particularly harmful sentiment that can result from this notion. It makes the concept of Mind Uploading seem dehumanizing, because conventional computers don’t display anything like intelligence or emotion. This makes people conflate the possible behaviors of future computers with the behaviors of current computers. Obviously computers don’t feel happiness or love, and so to say that the brain is like a computer is a farcical claim.

Machines don’t have to be as simple or as un-adaptable and invariant as they are today. The universe itself is a machine. In other words, either everything is a machine or nothing is.

This misunderstanding also makes people think that advocates and supporters of Mind Uploading are claiming that the mind is reducible to basic or simple autonomous operations, like cogs in a machine, which constitutes for many people a seeming affront to our privileged place in the universe as humans, in general, and to our culturally ingrained notions of human dignity being inextricably tied to physical irreducibility, in particular. The intuitive notions of human dignity and the ontologically privileged nature of humanity have yet to catch up with physicalism and scientific materialism (a.k.a. metaphysical naturalism). It is not the proponents of Mind Uploading that are raising these claims, but science itself – and for hundreds of years, I might add. Man’s privileged and physically irreducible ontological status has become more and more undermined throughout history since at least as far back as Darwin’s theory of Evolution, which brought the notion of the past and future phenotypic evolution of humanity into scientific plausibility for the first time.

It is also seemingly disenfranchising to many people, in that notions of human free will and autonomy seem to be challenged by physical reductionism and determinism – perhaps because many people’s notions of free will are still associated with a non-physical, untouchably metaphysical human soul (i.e., mind-body dualism) which lies outside the purview of physical causality. To compare the brain to a “mindless machine” is still for many people disenfranchising to the extent that it questions the legitimacy of their metaphysically tied notions of free will.

Just because the sheer audacity of experience and the raucous beauty of feeling is ultimately reducible to physical and procedural operations (I hesitate to use the word “mechanisms” for its likewise misconnotative conceptual associations) does not take away from it. If it were the result of some untouchable metaphysical property, a sentiment that mind-body-dualism promulgated for quite some time, then there would be no way for us to understand it, to really appreciate it, and to change it (e.g., improve upon it) in any way. Physicalism and scientific materialism are needed if we are to ever see how it is done and to ever hope to change it for the better. Figuring out how things work is one of Man’s highest merits – and there is no reason Man’s urge to discover and determine the underlying causes of the world should not apply to his own self as well.

Moreover, the fact that experience, feeling, being, and mind result from the convergence of singly simple systems and processes makes the mind’s emergence from such simple convergence all the more astounding, amazing, and rare, not less! If the complexity and unpredictability of mind were the result of complex and unpredictable underlying causes (like the metaphysical notions of mind-body dualism connote), then the fact that mind turned out to be complex and unpredictable wouldn’t be much of a surprise. The simplicity of the mind’s underlying mechanisms makes the mind’s emergence all the more amazing, and should not take away from our human dignity but should instead raise it up to heights yet unheralded.

Now that we have addressed such potentially harmful second-order misinterpretations, we will address their root: the common misinterpretations likely to result from the phrase “the computability of the mind”. Not only does this phrase not say that the mind is similar in basic operation to conventional paradigms of computation – as though a neuron were comparable to a logic gate or transistor – but neither does it necessarily make the more credible claim that the mind is like a computer in general. This makes the notion of Mind-Uploading seem dubious because it conflates two different types of physical systems – computers and the brain.

The kidney is just as computable as the brain. That is to say that the computability of mind denotes the ability to make predictively accurate computational models (i.e., simulations and emulations) of biological systems like the brain, and is not dependent on anything like a fundamental operational similarity between biological brains and digital computers. We can make computational models of a given physical system, feed it some typical inputs, and get a resulting output that approximately matches the real-world (i.e., physical) output of such a system.

The computability of the mind has very little to do with the mind acting as or operating like a computer, and much, much more to do with the fact that we can build predictively accurate computational models of physical systems in general. This also, advantageously, negates and obviates many of the seemingly dehumanizing and indignifying connotations identified above that often result from the claim that the brain is like a machine or like a computer. It is not that the brain is like a computer – it is just that computers are capable of predictively modeling the physical systems of the universe itself.
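
For example (my illustration, with arbitrary parameter values rather than anything from the article), a standard way to “compute” a piece of biology without claiming the biology is a computer is to numerically integrate a simple leaky integrate-and-fire model of a neuron’s membrane voltage: feed in a typical input current, and read out predicted spike times.

```python
# Leaky integrate-and-fire neuron: a textbook example of a predictively
# useful computational model of a biological system. The computer merely
# integrates dV/dt = (-V + I*R) / tau; the neuron is not "like a computer".
# All parameter values below are arbitrary illustrative choices.

def simulate_lif(current, tau=20.0, resistance=1.0,
                 v_threshold=1.0, dt=0.1, steps=1000):
    """Euler integration of a leaky integrate-and-fire neuron."""
    v, spike_times = 0.0, []
    for step in range(steps):
        dv = (-v + resistance * current) / tau
        v += dv * dt
        if v >= v_threshold:          # spike, then reset the membrane
            spike_times.append(step * dt)
            v = 0.0
    return spike_times

# Feed in a typical input; read out the model's predicted output.
print(simulate_lif(current=1.5)[:5])
```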

We Want Not To Become Machines, But To Keep Up With Them!

Too often is uploading portrayed as the means to superhuman speed of thought or to transcending our humanity. It is not that we want to become less human, or to become like a machine. For most Transhumanists and indeed most proponents of Mind Uploading and Substrate-Independent Minds, meat is machinery anyways. In other words there is no real (i.e., legitimate) ontological distinction between human minds and machines to begin with. Too often is uploading seen as the desire for superhuman abilities. Too often is it seen as a bonus, nice but ultimately unnecessary.

I vehemently disagree. Uploading has been from the start for me (and I think for many other proponents and supporters of Mind Uploading) a means of life extension, of deferring and ultimately defeating untimely, involuntary death, as opposed to an ultimately unnecessary means to better powers, a more privileged position relative to the rest of humanity, or to eschewing our humanity in a fit of contempt of the flesh. We do not want to turn ourselves into Artificial Intelligence, which is a somewhat perverse and burlesque caricature that is associated with Mind Uploading far too often.

The notion of gradual uploading is implicitly a means of life extension. Gradual uploading will be significantly harder to accomplish than destructive uploading. It requires a host of technologies and methodologies – brain-scanning, in-vivo locomotive systems such as but not limited to nanotechnology, or else extremely robust biotechnology – and a host of precautions to prevent causing phenomenal discontinuity, such as giving each non-biological functional replacement time to causally interact with adjacent biological components before the next biological component that it causally interacts with is likewise replaced. Gradual uploading is a much harder feat than destructive uploading, and the only advantage it has over destructive uploading is preserving the phenomenal continuity of a single specific person. In this way it is implicitly a means of life extension, rather than a means to the creation of AGI, because its only benefit is the preservation and continuation of a single, specific human life, and that benefit entails a host of added precautions and additional necessitated technological and methodological infrastructures.

If we didn’t have to fear the creation of recursively self-improving AI, biased towards being likely to recursively self-modify at a rate faster than humans are likely to (or indeed, are able to safely – that is, gradually enough to prevent phenomenal discontinuity), then I would favor biotechnological methods of achieving indefinite lifespans over gradual uploading. But with the way things are, I am an advocate of gradual Mind Uploading first and foremost because I think it may prove necessary to prevent humanity from being left behind by recursively self-modifying superintelligences. I hope that it ultimately will not prove necessary – but at the current time I feel that it is somewhat likely.

Most people who wish to implement or accelerate an intelligence explosion a la I.J. Good, and more recently Vernor Vinge and Ray Kurzweil, wish to do so because they feel that such a recursively self-modifying superintelligence (RSMSI) could essentially solve all of humanity’s problems – disease, death, scarcity, existential insecurity. I think that the potential benefits of creating a RSMSI are superseded by the drastic increase in existential risk it would entail in making any one entity superintelligent relative to humanity. The old God of yore is finally going out of fashion, one and a quarter centuries late to his own eulogy. Let’s please not make another one, now with a little reality under his belt this time around.

Intelligence is a far greater source of existential and global catastrophic risk than any technology that could be wielded by such an intelligence (except, of course, for technologies that would allow an intelligence to increase its own intelligence). Intelligence can invent new technologies and conceive of ways to counteract any defense systems we put in place to protect against the destructive potential of any given technology. A superintelligence is far more dangerous than rogue nanotech (i.e., grey goo) or bioweapons: when intelligence comes into play, all bets are off. I think human culture exemplifies this prominently enough. Moreover, for the first time in history the technological solutions to these problems – death, disease, scarcity – are on the conceptual horizon. We can fix these problems ourselves, without creating an effective God relative to Man and incurring the extreme potential for complete human extinction that such a relative superintelligence would entail.

Thus uploading constitutes one of the means by which humanity can choose, volitionally, to stay on the leading edge of change, discovery, invention, and novelty, if the creation of an RSMSI is indeed imminent. It is not that we wish to become machines and eschew our humanity; rather, the loss of autonomy and freedom inherent in the creation of a relative superintelligence is antithetical to the defining features of humanity. In order to preserve the uniquely human thrust toward greater self-determination in the face of such an RSMSI, or at least to be given the choice of doing so, we may require the ability to gradually upload so as to stay on equal footing in terms of speed of thought and general level of intelligence (which roughly correlates with the capacity to effect change in the world, and thus to determine the circumstances and conditions that would otherwise determine us).

In a perfect world we wouldn’t need to take the chance of phenomenal discontinuity inherent in gradual uploading. In gradual uploading there is always a chance, no matter how small, that we will come out the other side of the procedure as a different (i.e., phenomenally distinct) person. We can minimize that chance by extending the degree of graduality with which we replace the material constituents of the mind, and by minimizing the scale at which we replace them (e.g., gradual substrate replacement one ion channel at a time would be likelier to preserve phenomenal continuity than replacement neuron by neuron). But there is always a chance.

This is why biotechnological means of achieving indefinite lifespans have an immediate advantage over uploading, and why, if non-human RSMSI were not a worry, I would favor biotechnological methods over Mind Uploading. But this isn’t the case: rogue RSMSI are a potential problem, and so the need to secure our own autonomy in the face of a rising RSMSI may necessitate advocating Mind Uploading over biotechnological methods of achieving indefinite lifespans.

Mind Uploading has some ancillary benefits over biotechnological means of indefinite lifespans as well, however. If functional equivalence is validated (i.e., if it is validated that the basic approach works), mitigating existing sources of damage becomes categorically easier. In physical embodiment, repairing structural, connectional, or procedural sub-systems in the body requires (1) a means of determining the source of damage and (2) a host of technologies and corresponding methodologies to enter the body and make physical changes to negate or otherwise obviate the structural, connectional, or procedural source of such damages, and then exit the body without damaging or causing dysfunction to other systems in the process. Both of these requirements become much easier in the virtual embodiment of whole-brain emulation.

First, looking toward requirement (2): we do not need to design any technologies or methodologies for entering and leaving the system without damage or dysfunction, or for actually implementing physical changes to remediate the sources of damage. In virtual embodiment this requires nothing more than rewriting information. Since in the case of WBE we have the capacity to rewrite information as easily as it was written in the first place, actually implementing changes is as easy as rewriting a text file, although we would still need to know what changes to make (which is really the hard part in this case). There is no categorical difference, since it is all information, and we would already have a means of rewriting information.

Looking toward requirement (1), actually elucidating the structural, connectional, or procedural sources of damage and/or dysfunction, we see that virtual embodiment makes this much easier as well. In physical embodiment we would need to make changes to the physical system itself in order to determine the source of the damage. In virtual embodiment we could run a section of the emulation for a given amount of time, change or eliminate a given informational variable (e.g., a structure or component), and see how this affects the emergent system-state of the emulation instance.

Iteratively doing this to different components, and to different sequences of components, in trial-and-error fashion should lead to the elucidation of the structural, connectional, or procedural sources of damage and dysfunction. The facts that an emulation can be run faster than real time (thus accelerating this iterative change-and-check procedure) and that we can “rewind” or “play back” an instance of emulation time exactly as it occurred initially mean that noise (i.e., sources of error) from natural systemic state-changes would not affect the results of the procedure, whereas in physicality systems and structures are always changing, which constitutes a source of experimental noise. The conditions of the experiment would be exactly the same in every iteration of this change-and-check procedure. Moreover, the ability to arbitrarily speed up and slow down the emulation will aid in detecting and locating the emergent changes caused by changing or eliminating a given microscale component, structure, or process.
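
To make the procedure concrete, here is a minimal sketch in Python, using a small deterministic recurrent network as a toy stand-in for an emulation instance. The model, the variable names, and the divergence metric are all illustrative assumptions, not an existing WBE interface. The point it illustrates is the one above: because a deterministic emulation can be replayed from identical initial conditions, any divergence in the final system-state is attributable to the single perturbed component, with no experimental noise.

```python
import numpy as np

# Toy stand-in for an emulation instance: a small deterministic
# recurrent network. The procedure, not the model, is the point:
# every perturbation trial replays identical conditions, so any
# difference in the end-state is due to the perturbed component alone.

rng = np.random.default_rng(0)
N, STEPS = 20, 200
W = rng.normal(scale=0.3, size=(N, N))   # the "connectional" components
x0 = rng.normal(size=N)                  # shared initial condition

def run(weights, x, steps=STEPS):
    for _ in range(steps):
        x = np.tanh(weights @ x)
    return x

baseline = run(W, x0)

# Change-and-check: eliminate one informational variable at a time,
# replay the instance, and measure divergence of the emergent state.
divergence = {}
for i in range(N):
    for j in range(N):
        W_mod = W.copy()
        W_mod[i, j] = 0.0
        divergence[(i, j)] = np.linalg.norm(run(W_mod, x0) - baseline)

# Components whose removal moves the system-state most are the
# likeliest connectional sources of the behavior under study.
print(sorted(divergence, key=divergence.get, reverse=True)[:5])
```

Ranking components by end-state divergence is, of course, only the crudest version of the change-and-check idea; the same loop could perturb sequences of components, or compare whole trajectories rather than final states.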

Thus the process of finding the sources of damage correlative with disease and aging (especially insofar as the brain is concerned) could be greatly improved through the process of uploading. Moreover, WBE should accelerate the technological and methodological development of the computational emulation of biological systems in general, meaning that it would become possible to use such procedures to detect the structural, connectional, and procedural sources of age-related damage and systemic dysfunction in the body itself, not just the brain.

Note that this iterative change-and-check procedure would be just as possible via destructive uploading as via gradual uploading. Moreover, for people actually instantiated as whole-brain emulations, remediating those structural, connectional, and/or procedural sources of damage is much easier than it is for physically embodied humans. Incidentally, if distinguishing the homeostatic, regulatory, and metabolic structures and processes in the brain from its computational or signal-processing structures and processes is a requirement for uploading (I don’t think it necessarily is, although such a distinction would decrease the ultimate computational intensity, and thus the computational requirements, of uploading, thereby allowing it to be implemented sooner and with wider availability), then this iterative change-and-check procedure could also accelerate the elucidation of that distinction, for the same reasons that it could accelerate the elucidation of the structural, connectional, and procedural sources of age-related systemic damage and dysfunction.

Lastly, while uploading (particularly in instances in which a single entity or small group of entities is uploaded prior to the rest of humanity, i.e., not a maximally distributed intelligence explosion) itself constitutes a source of existential risk, it also constitutes a means of mitigating existential risk. Currently we stand on the surface of the earth, naked to whatever might lurk in the deep night of space. We have not been watching the sky for long enough to know with any certainty that some unforeseen cosmic process will not come along and wipe us out at any time. Uploading would allow at least a small portion of humanity to live virtually on a computational substrate located deep underground, away from the surface of the earth and its inherent dangers, thus preserving the human heritage should an extinction event befall the surface. Uploading would also remove the danger of being physically killed by some accident of physicality, like being hit by a bus or struck by lightning.

Uploading is also the most resource-efficient means of life extension on the table, because virtual embodiment essentially negates the need for most physical resources, requiring instead only one: energy. And increasing computational price-performance means that the amount a given unit of energy can do is continually increasing.

It also mitigates the most pressing ethical problem of indefinite lifespans: overpopulation. In virtual embodiment, overpopulation ceases to be an issue almost ipso facto. I agree with John Smart’s STEM-compression hypothesis, that in the long run the advantages proffered by virtual embodiment will make choosing it over physical embodiment an obvious choice for most civilizations, and I think it will be the volitional choice for most future persons. It is safer, more resource-efficient (and thus more ethical, if one thinks that forestalling future births in order to maintain existing lives is unethical), and the more advantageous choice. We will not need to say: migrate into virtuality if you want another physically embodied child. Most people will make the choice to go VR themselves, simply due to the numerous advantages and the lack of any experiential incomparabilities (i.e., modalities of experience possible in physicality but not in VR).

So in summary, yes, Mind Uploading (especially gradual uploading) is more a means of life extension than a means to arbitrarily greater speed of thought, intelligence, or power (i.e., capacity to effect change in the world). We do not seek to become machines, only to retain the capability of choosing to remain on equal footing with them if the creation of RSMSI is indeed imminent. There is no other reason to increase our collective speed of thought, and doing so would be arbitrary – unless we expected to be unable to prevent the physical end of the universe, in which case a greater speed of thought would increase the ultimate amount of time, and the number of lives, that could be instantiated in the time we have left.

The fallacy of many of these misconceptions may be glaringly obvious, especially to readers familiar with Mind Uploading as a notion and with Substrate-Independent Minds and/or Whole-Brain Emulation as disciplines. To some extent I may be preaching to the choir in these cases. But I find many of these misinterpretations far too prevalent and recurrent to be left alone.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Mind as Interference with Itself: A New Approach to Immediate Subjective-Continuity – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 21, 2013
******************************
This essay is the sixth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first five chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, “Concepts for Functional Replication of Biological Neurons”, “Gradual Neuron Replacement for the Preservation of Subjective-Continuity”, and “Wireless Synapses, Artificial Plasticity, and Neuromodulation”.
***
Electromagnetic Theory of Mind
***

One line of thought I explored during this period of my conceptual work on life extension was concerned with whether it is not the material constituents of the brain manifesting consciousness, but rather the emergent electric or electromagnetic fields generated by the concerted operation of those material constituents, that instantiate mind. This work sprang from reading literature on Karl Pribram’s holonomic-brain theory, a “holographic” theory of brain function. A hologram can be cut in half, and, if illuminated, each piece will still retain the whole image, albeit at a loss of resolution. This is due to informational redundancy in the recording procedure (i.e., because it records phase and amplitude, as opposed to just amplitude in normal photography). Pribram’s theory sought to explain the results of experiments in which patients who had up to half of their brains removed nonetheless retained levels of memory and intelligence comparable to what they possessed prior to the procedure, and the similar results of experiments in which the brain is sectioned and the relative organization of those sections rearranged without the drastic loss in memory or functionality one would anticipate. These experiments appear to show a holonomic principle at work in the brain. I immediately saw the relation to gradual uploading, particularly the brain’s ability to take over the function of parts recently damaged or destroyed beyond repair. I also saw the emergent electric fields produced by the brain as much better candidates for exhibiting the material properties needed for such holonomic attributes. For one, electromagnetic fields (if considered as waves rather than particles) are continuous, rather than modular and discrete as in the case of atoms.
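
A loose numerical analogy may help here; it is only an analogy, not Pribram’s actual model. In a Fourier-domain “hologram” of a signal, every recorded coefficient carries phase and amplitude information about the whole signal, so discarding a large portion of the record blurs everything uniformly rather than deleting any one part of the scene. A minimal 1-D sketch in Python:

```python
import numpy as np

# A "scene" with two distinct features.
n = 512
x = np.zeros(n)
x[100:110] = 1.0
x[300:330] = 0.5

# The Fourier transform records amplitude AND phase, and each
# coefficient carries information about the entire signal.
hologram = np.fft.fft(x)

# Discard half of the recorded coefficients (the high-frequency band).
cut = hologram.copy()
cut[n // 4 : 3 * n // 4] = 0

reconstruction = np.fft.ifft(cut).real

# Both features survive, blurred: the loss is a global loss of
# resolution, not the deletion of half the scene.
print(reconstruction[100:110].mean(), reconstruction[300:330].mean())
```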

The electric-field theory of mind also seemed to provide a hypothetical explanatory model for the existence of subjective-continuity through gradual replacement. (Remember that the existence and successful implementation of subjective-continuity through replacement is validated by our subjective sense of continuity through the normative metabolic replacement of the molecular constituents of our biological neurons, a.k.a. molecular turnover.) If the emergent electric or electromagnetic fields of the brain are indeed holonomic (i.e., possess the attribute of holographic redundancy), then we have a potential explanatory model for why the loss of a constituent module (e.g., a neuron, neuron cluster, or neural network) fails to cause subjective-discontinuity: subjective-continuity is retained because the loss of a constituent part doesn’t negate the emergent information (the big picture), but only eliminates a fraction of its original resolution. This looked like empirical support for the claim that it is the electric fields, rather than the material constituents of the brain, that facilitate subjective-continuity.

Another, more speculative, aspect of this theory (i.e., one not supported by empirical research or literature) involved the hypothesis that the increased interaction among electric fields in the brain (i.e., interference via wave superposition, whose result is determined by both phase and amplitude) might itself provide a physical basis for the holographic/holonomic property of informational redundancy, in the event that electric fields turned out not to already possess or retain the holographic-redundancy attributes mentioned above.

A local electromagnetic field is produced by the electrochemical activity of the neuron. This field then undergoes interference with other local fields, and at each point up the scale we have more fields interfering and combining. The level of disorder makes the claim that salient computation is occurring here dubious, due to the lack of precision and the high level of variability, which provide ample basis for dysfunction (including increased noise, the lack of a stable, i.e., static or material, means of information storage, and poor signal transduction, or at least a high decay rate for signal propagation). However, because the fields interfere at every scale, a local electric field contains not only information encoding the operational state and functional behavior of the neuron that originated it, but also information encoding the operational states of other neurons, acquired by interacting, interfering, and combining with the fields those other neurons produce (in both amplitude and phase, as in holography). Thus, if one neuron dies, some of its properties could already be encoded in other EM waves. This appeared to provide a possible physical basis for the brain’s hypothesized holonomic properties.

If electric fields are the physically continuous process that allows for continuity of consciousness, then this suggests that computational substrates instantiating consciousness need to exhibit similar properties. This is not a form of vitalism: I am not postulating that some extra-physical (i.e., metaphysical) process instantiates consciousness, but rather that a material aspect does, and that such an aspect may have to be incorporated in any attempt at gradual substrate replacement meant to retain subjective-continuity through the procedure. Nor is it a matter of simulating the emergent electric fields using normative computational hardware. The claim is not that the electric fields provide needed functionality, or implement some salient aspect of computation that would otherwise be left out, but that the emergent EM fields form a physical basis for continuity and emergence unrelated to functionality yet imperative to experiential continuity or subjectivity. I distinguish this from the type of subjective-continuity discussed thus far, the feeling of being the same person through the process of gradual substrate replacement, via the terms “immediate subjective-continuity” and “temporal subjective-continuity”. Immediate subjective-continuity is the capacity to feel, period. Temporal subjective-continuity is the state of feeling like the same person you were. Thus, while temporal subjective-continuity inherently necessitates immediate subjective-continuity, immediate subjective-continuity does not require temporal subjective-continuity as a fundamental prerequisite.

Thus I explored variations of NRU operational modality (i.e., prosthetics on the cellular scale) that incorporate this, particularly for the informational-functionalist (i.e., computational) NRUs, since the physical-functionalist NRUs were presumed to instantiate these same emergent fields via their normative operation. The approach consisted of either (a) translating the informational output of the models into the generation of physical fields, either at the end of the process or throughout it, by providing the internal area or volume of the unit with a grid of electrically conductive nodes, such that the voltage patterns can be physically instantiated in temporal synchrony with the computational model, or (b) constructing the computational substrate instantiating the computational model so as to generate emergent electric fields in a manner as consistent with biological operation as possible. For example, in the brain a given neuron is never in an electrically neutral state, never completely off, but always in a range of values between on and off [see Chapter 2], which means that there is never a break (i.e., a spatiotemporal region of discontinuity) in its emergent electric fields; these operational properties would have to be replicated by any computational substrate used to replicate biological neurons via the informationalist-functionalist approach, if the premise that they facilitate immediate subjective-continuity is correct.
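
As a rough illustration of option (a), consider the following toy loop, in which a computational neuron model’s voltage is mirrored onto a stand-in “grid” of field actuators at every timestep. The FieldGrid class and all parameter values are hypothetical placeholders for the sake of the sketch, not a proposed device:

```python
import numpy as np

class FieldGrid:
    """Stand-in for a grid of electrically conductive nodes that
    physically instantiates a computed voltage pattern."""
    def __init__(self, nodes):
        self.voltages = np.zeros(nodes)
    def drive(self, pattern):
        self.voltages[:] = pattern   # instantiate the computed pattern

rng = np.random.default_rng(1)
grid = FieldGrid(nodes=16)
v = -70.0                            # resting membrane potential, mV

for t in range(1000):
    # Even "at rest" the model is never electrically neutral: the
    # resting potential is an active, fluctuating state (ion pumps).
    v += rng.normal(scale=0.2) + 0.05 * (-70.0 - v)
    pattern = v + rng.normal(scale=0.05, size=16)  # per-node variation
    grid.drive(pattern)              # field updated in synchrony with the model

print(grid.voltages[:4])
```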

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Immortality: Material or Ethereal? Nanotech Does Both! – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 11, 2013
******************************

This essay is the second chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first chapter was previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”.

In August 2006 I conceived of the initial cybernetic brain-transplant procedure. It originated from a very simple, even intuitive sentiment: if there were heart and lung machines and prosthetic organs, then why couldn’t these be integrated in combination with modern (and future) robotics to keep the brain alive past the death of its biological body? I saw a possibility, felt its magnitude, and threw myself into realizing it. I couldn’t think of a nobler quest than the final eradication of involuntary death, and felt willing to spend the rest of my life trying to make it happen.

First I collected research on organic brain transplantation, on maintaining the brain’s homeostatic and regulatory mechanisms outside the body (or in this case without the body), on a host of prosthetic and robotic technologies (including sensory prosthesis and substitution), and on the work in Brain-Computer-Interface technologies that would eventually allow a given brain to control its new, non-biological body—essentially collecting the disparate mechanisms and technologies that would collectively converge to facilitate the creation of a fully cybernetic body to house the organic brain and keep it alive past the death of its homeostatic and regulatory organs.

I had by this point come across online literature on Artificial Neurons (ANs) and Artificial Neural Networks (ANNs), which are basically simplified mathematical models of neurons, meant to process information in a way coarsely comparable to biological neurons. There was no mention in the literature of integrating them with existing neurons, or of replacing existing neurons, toward the objective of immortality; their use was merely as an interesting approach to computation, particularly well-suited to certain situations. While artificial neurons can be run on general-purpose hardware (massively parallel architectures being the most efficient for ANNs), I had something more akin to neuromorphic hardware in mind (though I wasn’t aware of that just yet).

At its most fundamental level, an Artificial Neuron need not even be physical at all. Its basic definition is a mathematical model roughly based on neuronal operation, and there is nothing precluding that model from existing solely on paper, with no actual computation going on. When I discovered ANs, I had thought that a given artificial neuron was a physically embodied entity rather than a software simulation, i.e., an electronic device that operates in a way comparable to biological neurons. Upon learning that they were mathematical models, however, and that each AN needn’t be a separate entity from the rest of the ANs in a given AN network, I saw no problem in designing them to be separate physical entities (which they needed to be in order to fit the purposes I had for them, namely the gradual replacement of biological neurons with prosthetic functional equivalents). Each AN would be a software entity run on a piece of computational substrate, enclosed in a protective casing allowing it to co-exist with the biological neurons already in place. The mathematical or informational outputs of the simulated neuron would be translated into biophysical, chemical, and electrical outputs by operatively connecting the simulation to an appropriate series of actuators (ranging from systems as simple as those producing electric fields or currents to the release of chemical stores of neurotransmitters), along with a series of sensors to translate biophysical, chemical, and electrical properties into the mathematical or informational form required as input by the simulated AN.
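
A minimal sketch of this arrangement, with the standard weighted-sum-plus-nonlinearity artificial-neuron model at its core and hypothetical placeholder functions for the sensor and actuator stages (nothing here corresponds to a real device or library):

```python
import math

def sense(biophysical_inputs):
    """Placeholder sensor stage: translate measured chemical and
    electrical properties into the numbers the model accepts."""
    return [float(x) for x in biophysical_inputs]

def artificial_neuron(inputs, weights, bias):
    """Standard AN model: weighted sum passed through a nonlinearity."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))    # sigmoid activation

def actuate(output):
    """Placeholder actuator stage: translate the informational output
    into a physical command, e.g., a neurotransmitter-release level."""
    return {"release_fraction": output}

signal = sense([0.3, 1.2, -0.4])
command = actuate(artificial_neuron(signal, weights=[0.5, 0.8, -1.1], bias=0.1))
print(command)
```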

Thus, at this point, I didn’t make a fundamental distinction between replicating the functions and operations of a neuron via physical embodiment (e.g., via physically embodied electrical, chemical, and/or electromechanical systems) and via virtual embodiment (usefully considered as 2nd-order embodiment, e.g., via a mathematical or computational model, whether simulation or emulation, run on a 1st-order, physically embodied computational substrate).

My appreciation of the potential advantages, disadvantages, and categorical differences between these two approaches was still a few months away. When I discovered ANs, still thinking of them as physically embodied electronic devices rather than as mathematical or computational models, I hadn’t yet moved on to ways of preserving the organic brain itself so as to delay its organic death. Their utility in constituting a more permanent, durable, and readily repairable supplement for our biological neurons wasn’t yet apparent.

I initially saw their utility as lying in intelligence amplification, extension, and modification through their integration with the existing biological brain. I realized that they were categorically different from Brain-Computer Interfaces (BCIs) and normative neural prostheses in being able to become an integral and continuous part of our minds and personalities, or, more properly, of the subjective, experiential parts of our minds. If they communicated with single neurons and interacted with them on their own terms (if the two were operationally indistinct), then they could become a continuous part of us in a way that didn’t seem possible for normative BCI, due to the latter’s fundamental operational dissimilarity with existing biological neural networks. I also collected research on the artificial synthesis and regeneration of biological neurons as an alternative to ANs. This approach would replace an aging or dying neuron with an artificially synthesized but still structurally and operationally biological neuron, so as to maintain the aging or dying neuron’s existing connections and relative location. I saw this procedure (i.e., adding artificial, or artificially synthesized but still biological, neurons to the existing neurons constituting our brains, not yet for the purpose of gradually replacing the brain but for the purpose of mental expansion and amplification) as allowing us not only to extend our existing functional and experiential modalities (e.g., making us smarter through an increase in synaptic density and connectivity, and in the number of neurons in general), but even to create fundamentally new functional and experiential modalities, categorically unimaginable to us now, via the integration of wholly new Artificial Neural Networks embodying such modalities. Note that I saw this as newly possible with my cybernetic-body approach because additional space could be made for the additional neurons and neural networks, whereas the degree to which we could integrate new, artificial neural networks into a normal biological body would be limited by the available volume of the unmodified skull.

Before I discovered ANs, I speculated in my notes as to whether the “bionic nerves” alluded to in some of the literature I had collected by that point (specifically regarding BCI, neural prostheses, and the ability to operatively connect a robotic prosthetic extremity, e.g., an arm or a leg, via BCI) could be used to extend the total number of neurons and synaptic connections in the biological brain. This sprang from my knowledge of the operational similarities between neurons and muscle cells, both of which belong to the larger class of excitable cells.

An approach like Kurzweil’s cyborgification (i.e., that we could integrate non-biological systems with our biological brains to such an extent that the biological portions become so small as to be negligible to our subjective-continuity when they succumb to cell death, thus achieving effective immortality without needing to replace any of our existing biological neurons at all) may have been implicit in this concept. I envisioned our brains increasing in size many times over, such that the majority of the mind would be embodied or instantiated by the artificial portions rather than the biological ones. The degree to which the loss of a part of our brain affects our emergent personalities depends on how large that lost part is relative to the total size of the brain (other potential metrics include connectivity and the degree to which other systems depend on that portion for their own normative operation): the loss of a lobe is much worse than the loss of a neuron. This follows naturally from the initial premise. The lack of any explicit statement of this realization in my notes during this period, however, makes this mere speculation.

It wasn’t until November 11, 2006, that I had the fundamental insight underlying mind-uploading: that replacing existing biological neurons with non-biological functional equivalents that maintain the existing relative locations and connections of those biological neurons could very well maintain the memory and personality embodied therein or instantiated thereby, essentially achieving potential technological immortality, since the approach is based on replacement, and iterations of replacement cycles can be run indefinitely. Moreover, because we would be manufacturing such functional equivalents ourselves, we could not only diagnose potential dysfunctions more easily and quickly, but could also manufacture them with readily replaceable parts, thus simplifying the process of physically remediating any dysfunction or operational degradation. We could even include systems for the safe import and export of replacement components, or make all such components readily detachable, so that we would not have to damage adjacent structures and systems in the process of removing a given component.

Perhaps it wasn’t so large a conceptual step from knowing that computational models of neurons exist to realizing that they could be used to replace existing biological neurons toward the aim of immortality. Perhaps I take too much credit for independently conceiving both the underlying conceptual gestalt of mind-uploading and some specific technologies and methodologies for its pragmatic technological implementation. Nonetheless, it was a realization I arrived at on my own, and one that I felt would allow us to escape the biological death of the brain itself.

While I was aware (after a little more research) that ANNs were mathematical, and thus computational, models of neurons (hereafter referred to as the informationalist-functionalist approach), I felt that a physically embodied (i.e., not computationally emulated or simulated) prosthetic approach (hereafter referred to as the physicalist-functionalist approach) would be the better one to take. This was because even if the brain were completely reducible to computation, a prosthetic approach would still facilitate the computation underlying the functioning of the neuron (just as the physical operations of biological neurons do presently), whereas if the brain proved to be computationally irreducible, the prosthetic approach would presumably preserve whatever salient physical processes were necessary. So the prosthetic approach didn’t necessitate the computational-reducibility premise, but neither did it preclude such a view, thereby allowing me to hedge my bets and increase the cumulative likelihood of maintaining subjective-continuity of consciousness through substrate replacement in general.

This marks a telling proclivity recurrent throughout my project: the development of mutually exclusive and methodologically and/or technologically alternate systems for a given objective, each based upon alternate premises and contingencies – a sort of possibilizational web unfurling fore and outward. After all, if one approach failed, then we had alternate approaches to try. This seemed like the work-ethic and conceptualizational methodology that would best ensure the eventual success of the project.

I also had less assurance in the sufficiency of the informationalist-functionalist approach at the time, stemming mainly from a misconception about the premises of normative Whole-Brain Emulation (WBE). When I first discovered ANs, I was more dubious about the computational reducibility of the mind, because I thought that it relied on the premise that neurons act in a computational fashion (i.e., like normative computational paradigms) to begin with, a conflation of classical computation with neural operation, rather than on the conclusion, drawn from the Church-Turing thesis, that mind is computable because the universe is. It is not that the brain is a computer to begin with, but that we can model any physical process via mathematical/computational emulation and simulation. The latter is the correct view, and I didn’t fully realize this until after I had discovered the WBE roadmap in 2010. This fundamental misconception allowed me, however, to independently arrive at the insight underlying the real premise of WBE: that combining premise A, that we have various mathematical and computational models of neuron behavior, with premise B, that we can run mathematical models on computers, yields the conclusion C, that we can simply run the relevant mathematical models on computational substrate, thereby effectively instantiating the mind “embodied” in those neural operations, while simultaneously eliminating many logistical and technological challenges of the prosthetic approach. This seemed likelier than the original assumption (conflating neuronal activity with normative computation, as though neurons were a special case unlike, say, muscle cells or skin cells, which is not the presumption WBE makes at all), because it required only the ability to mathematically model anything, rather than a fundamental equivalence between two different types of physical system (the neuron and the classical computer). The fact that I mistakenly saw my approach to emulation as categorically dissimilar to normative WBE also helped urge me to continue conceptual development of the project’s various sub-aims after finding that the idea of brain emulation already existed, because I thought that my approach was sufficiently different to warrant my continued effort.

There are other reasons for suspecting that mind may not be computationally reducible using current computational paradigms, reasons that rely neither on vitalism (i.e., the claim that mind is at least partially immaterial and irreducible to physical processes) nor on the invalidity of the Church-Turing thesis. This line of reasoning has nothing to do with functionality and everything to do with possible physical bases for subjective-continuity, both (a) immediate subjective-continuity (i.e., how can we be a unified, continuous subjectivity if all our component parts are discrete and separate in space?), which can be considered the capacity to have subjective experience, also called sentience (as opposed to sapience, which designates the higher cognitive capacities, like abstract thinking), and (b) temporal subjective-continuity (i.e., how do we survive as continuous subjectivities through a process of gradual substrate replacement?). Thus this argument impacts the possibility of computationally reproducing mind only insofar as the definition of mind is not strictly functional but is made to include a subjective sense of self, i.e., immediate subjective-continuity. Note that subjective-continuity through gradual replacement is not speculative (only the scale and rate required to implement it sufficiently are), but rather has proof of concept in the normal metabolic replacement of the neuron’s constituent molecules. Each of us is materially a different person than we were seven years ago, and we still claim to retain subjective-continuity. Thus gradual replacement works; it is just the scale and rate required that are under question.

This is another way in which my approach and project differ from WBE. WBE equates functional equivalence (i.e., the same output via different processes) with subjective equivalence, whereas my approach involved developing variant approaches to neuron-replication-unit (NRU) design, each based on a different hypothetical basis for instantive subjective-continuity.

Are Current Computational Paradigms Sufficient?

Biological neurons are both analog and binary. It is useful to consider a 1st tier of analog processes, manifest in the action potentials occurring all over the neuronal soma and terminals, and a 2nd tier of binary processing, in that either the sum of those potentials crosses the threshold value needed for the neuron to fire or it falls short and the neuron fails to fire. Thus the analog processes form the basis of the digital ones. Moreover, the neuron is in an analog state even in the absence of membrane depolarization, through the generation of the resting membrane potential (maintained via active ion-transport proteins), which is analog rather than binary because it always undergoes minor fluctuations, being instantiated by an active process (ion pumps). Thus the neuron at any given time is always in the process of a state-transition, including minor state-transitions within the variation range allowed by a given higher-level static state: the resting membrane potential is a single state, yet it still undergoes minor fluctuations because the ions and components manifesting it undergo state-transitions of their own without the resting membrane potential itself undergoing a state-transition. The neuron is thus never definitively on or off. This brings us to the first potential physical basis for both immediate and temporal subjective-continuity. Analog states are continuous, and the fact that there is never a definitive break in the processes occurring at the lower levels of the neuron represents a potential basis for our subjective sense of immediate and temporal continuity.
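
The two tiers can be made concrete with a toy leaky integrate-and-fire model, a standard simplification used here only as an illustration (all constants are arbitrary): the membrane voltage is the analog first tier, and it fluctuates even at rest, while the threshold test is the binary second tier.

```python
import numpy as np

rng = np.random.default_rng(2)
v_rest, v_thresh, v = -70.0, -55.0, -70.0   # mV
spikes = []

for t in range(2000):
    drive = 2.0 if 500 <= t < 1500 else 0.0  # input current only mid-run
    noise = rng.normal(scale=0.3)            # active resting state: never static
    v += 0.1 * (v_rest - v) + drive + noise  # analog tier: continuous state
    if v >= v_thresh:                        # binary tier: fire / don't fire
        spikes.append(t)
        v = v_rest                           # reset after firing

print(len(spikes), "spikes; first few:", spikes[:5])
```

Note that even when no input arrives, the voltage keeps fluctuating around its baseline; the binary behavior exists only as a threshold test layered on top of that continuous analog process.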

Paradigms of digital computation, on the other hand, are at the lowest scale either definitively on or definitively off. While any voltage within a certain range will cause the generation of an output, computation is still at base binary, because in the absence of input the logic elements are not producing any sort of fluctuating voltage: they are definitively off. In binary computation the substrates undergo a break (i.e., a region of discontinuity) in their processing in the absence of inputs, and are in this way fundamentally dissimilar to the low-level operational modality of biological neurons, being procedurally discrete rather than procedurally continuous.

If the premise holds true that the analog and procedurally continuous nature of neuron functioning (including action potentials, the resting membrane potential, and metabolic processes) forms a potential basis for immediate and temporal subjective-continuity, then current digital paradigms of computation may prove insufficient for maintaining subjective-continuity if used as the substrate in a gradual-replacement procedure, while still being sufficient to functionally replicate the mind in all empirically verifiable metrics and measures. This is due both to the operational modality of binary processing (i.e., the lack of analog output) and to its procedural modality (the lack of temporal continuity, that is, of minor fluctuations around a baseline state when in a resting or inoperative state). A logic element could have a fluctuating resting voltage rather than the absence of any voltage, and could thus be procedurally continuous while still being operationally discrete by producing solely binary outputs.

So there are two possibilities here. One is that any physical substrate used to replicate a neuron (whether via 1st-order embodiment, a.k.a. prosthesis/physical systems, or via 2nd-order embodiment, a.k.a. computational emulation or simulation) must not undergo a break in its operation in the absence of input, because biological neurons do not, and this may be a potential basis for instantive subjective-continuity; it must instead produce a continuous or uninterrupted signal when in a “steady state” (i.e., in the absence of inputs). The second possibility includes all the premises of the first, but adds that such an inoperative-state signal (or “no-inputs”-state signal) should undergo minor fluctuations, because only then is a steady stream of causal interaction occurring; producing an unvarying steady signal could be as discontinuous as producing no signal at all, like being on pause.
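
The three cases (an ordinary binary element that goes silent without input, an element holding a constant steady-state signal, and an element whose “no-inputs” state fluctuates around a baseline) can be contrasted in a few lines of purely illustrative Python:

```python
import numpy as np

rng = np.random.default_rng(3)

def binary_element(inputs):
    # Case 0: "off" means silent, a procedural break in the absence of input.
    return [1.0 if x else 0.0 for x in inputs]

def steady_element(inputs, baseline=0.1):
    # Case 1: a constant hum when inoperative, no break, but no variation.
    return [1.0 if x else baseline for x in inputs]

def fluctuating_element(inputs, baseline=0.1, jitter=0.02):
    # Case 2: an actively fluctuating resting state, so causal
    # interaction never pauses, while outputs for real inputs stay binary.
    return [1.0 if x else baseline + rng.normal(scale=jitter)
            for x in inputs]

stream = [0, 0, 1, 0, 1, 0, 0]
print(binary_element(stream))
print(steady_element(stream))
print(fluctuating_element(stream))
```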

Thus one reason for developing the physicalist-functionalist (i.e., physically embodied prosthetic) approach to NRU design was to hedge bets, in case (a) current computational substrates fail to replicate a personally continuous mind for the reasons described above; (b) we fail to discover the principles underlying a given physical process (thus being unable to predictively model it) but still succeed in integrating it with the artificial systems constituting the prosthetic approach, until such time as we are able to discover its underlying principles; or (c) we find some other, heretofore unanticipated, conceptual obstacle to the computational reducibility of mind.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Copeland, B. J. (2008). The Church-Turing Thesis. In The Stanford Encyclopedia of Philosophy (Fall 2008 Edition). Retrieved February 28, 2013, from http://plato.stanford.edu/archives/fall2008/entries/church-turing

Crick, F. (1984). Memory and molecular turnover. Nature, 312(5990), 101. PMID: 6504122

Criterion of Falsifiability. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/201091/criterion-of-falsifiability

Drexler, K. E. (1986). Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Books.

Grabianowski, E. (2007). How Brain-Computer Interfaces Work. Retrieved February 28, 2013, from http://computer.howstuffworks.com/brain-computer-interface.htm

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013, from http://www.minduploading.org/

Martins, N. R., Erlhagen, W. & Freitas Jr., R. A. (2012). Non-destructive whole-brain monitoring using nanorobots: Neural electrical data rate requirements. International Journal of Machine Consciousness. Retrieved February 28, 2013, from http://www.nanomedicine.com/Papers/NanoroboticBrainMonitoring2012.pdf

Narayan, A. (2004). Computational Methods for NEMS. Retrieved February 28, 2013, from http://nanohub.org/resources/407

Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap (Technical Report #2008-3). Future of Humanity Institute, Oxford University.

Star, E. N., Kwiatkowski, D. J. & Murthy, V. N. (2002). Rapid turnover of actin in dendritic spines and its regulation by activity. Nature Neuroscience, 5, 239-246.

Tsien, J. Z., Rampon, C., Tang, Y. P. & Shimizu, E. (2000). NMDA receptor dependent synaptic reinforcement as a crucial process for memory consolidation. Science, 290, 1170-1174.

Zwass, V. (2013). Neural Network. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/410549/neural-network

Wolf, W. (2009, March). Cyber-physical Systems. Embedded Computing. Retrieved February 28, 2013, from http://www.jiafuwan.net/download/cyber_physical_systems.pdf

 

More of Everything for Everyone – Article by Bradley Doucet

The New Renaissance Hat
Bradley Doucet
July 4, 2012
******************************
At any given time, I like to be reading one fiction and one non-fiction book. Rarely, though, do my choices dovetail as serendipitously as they did just recently, when I was reading Abundance: The Future Is Better Than You Think (2012) by Peter H. Diamandis and Steven Kotler alongside The Diamond Age (1995) by Neal Stephenson. The former is a look at the world-changing technologies coming down the pipe in a variety of fields that promise a brighter future for all of humanity. The latter is a story set in such a future, where diamonds are cheaper than glass.

If Stephenson’s world of inexpensive diamonds sounds farfetched to you, consider the entirely factual tale that Diamandis and Kotler use to kick off their book. Once upon a time, you see, aluminum was the world’s most precious metal. As late as the 1800s, aluminum utensils were reserved for the most honoured guests at royal banquets, the other guests having to make do with mere gold utensils. Yet aluminum is the third most abundant element in the Earth’s crust, behind oxygen and silicon, making up 8.3 percent of the mass of the planet. It is simply never found in nature as a pure metal, and early procedures for separating it out of the claylike material called bauxite were prohibitively expensive. Modern procedures have made it so ubiquitous and cheap that we wrap our food in it and then discard it without so much as a second thought.

The moral of the story is that scarcity is often contextual. Technology, as the authors explain, is a “resource-liberating mechanism.” And the technologies being developed right now have the power to liberate enough resources to feed, clothe, educate, and free the world.

The Future Looks Bright

Peter Diamandis is the Chairman and CEO of the X PRIZE Foundation, best known for the $10-million Ansari X PRIZE that launched the private spaceflight industry. He conceived of the project back in 1993 after reading Charles Lindbergh’s The Spirit of St. Louis (1954) and learning about the $25,000 prize funded by Raymond Orteig that spurred Lindbergh to make the first-ever non-stop flight from New York to Paris in 1927. Diamandis also holds degrees in molecular biology and aerospace engineering from MIT, and a medical degree from Harvard.

Diamandis and his co-author, best-selling writer and journalist Steven Kotler, do not attempt to paper over the plight of the world’s poor, who still lack adequate clean water, food, energy, health care, and education. Still, there has been significant progress “at the bottom” in the past four decades. “During that stretch, the developing world has seen longer life expectancies, lower infant mortality rates, better access to information, communication, education, potential avenues out of poverty, quality health care, political freedoms, economic freedoms, sexual freedoms, human rights, and saved time.”

It is technology that has improved the lot of many of the world’s poor, and in Abundance, we get a quick tour of dozens of the latest exponential technologies that are poised to make serious dents in humanity’s remaining scarcity problems. There is the Lifesaver water purification system, the jerry can version of which can produce 25,000 litres of safe drinking water, enough for a family of four for three years, for only half a cent a day. There is aeroponic vertical farming—essentially a skyscraper filled with suspended plants on every floor being fed through a nutrient-rich mist—which requires 80 percent less land, 90 percent less water, and 100 percent fewer pesticides than current farming practices. There are advances that promise to make solar power more affordable and easier to store, which is going to be huge given that “[t]here is more energy in the sunlight that strikes the Earth’s surface in an hour than all the fossil energy consumed in one year.”

Stephenson’s The Diamond Age actually gets a mention in the chapter on education thanks to its depiction of what experts in artificial intelligence (AI) refer to as a “lifelong learning companion,” which has a central role to play in the novel. The Khan Academy has already shaken things up with its 2,000+ free online educational videos and two million visitors a month as of the summer of 2011. But things will be shaken up again soon enough by these AI tutors that “track learning over the course of one’s lifetime, both insuring a mastery-level education and making exquisitely personalized recommendations about what exactly a student should learn next.” With mobile telephony already sweeping the developing world and with smartphones getting cheaper and more powerful with each passing year, it won’t be long before there’s an AI tutor in every pocket.

Abundance, Freedom, and the Ultimate Resource

To sum up, in the world of the future, although there will be more humans on the planet, each one of us will be far wealthier on average than we are today. We will have more water, more food, more energy, more education, more health care, and make less of an impact on the natural environment to boot. And the healthy, educated, well-fed inhabitants of the world of tomorrow will be freer as well, no longer kept down by force of arms and blight of ignorance. We’ve already had a glimpse of what mobile phones and information technology can accomplish in last year’s Arab Spring, regardless of whether or not Egypt has made the most of the opportunity.

Not that we should be complacent, though. There are no guarantees, and any number of factors could derail us from the path we’re on. But there are powerful forces pushing us in a positive direction. The X PRIZE Foundation is doing its best to spur innovation with various prizes modelled after its initial success. Technophilanthropists like Bill Gates are also doing their part. And then there are the poor themselves, the bottom billions who are becoming the rising billions. As Diamandis and Kotler write, echoing the late Julian Simon, author of The Ultimate Resource:

[T]he greatest tool we have for tackling our grand challenges is the human mind. The information and communications revolution now underway is rapidly spreading across the planet. Over the next eight years, three billion new individuals will be coming online, joining the global conversation, and contributing to the global economy. Their ideas—ideas we’ve never before had access to—will result in new discoveries, products, and inventions that will benefit us all.

I still have a hundred pages or so to go in The Diamond Age, so I don’t know how that story turns out. But in the real world, all signs point to technology-fuelled increases in abundance and freedom in the poorest regions of the planet over the next couple of decades. Abundance encourages us to do everything we can to help those technologies develop and spread, to the benefit of the entire human race.

Bradley Doucet is Le Québécois Libre‘s English Editor. A writer living in Montreal, he has studied philosophy and economics, and is currently completing a novel on the pursuit of happiness. He also writes for The New Individualist, an Objectivist magazine published by The Atlas Society, and sings.