
Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese


The New Renaissance Hat
Franco Cortese
May 18, 2013
******************************
This essay is the third chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first two chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death” and “Immortality: Material or Ethereal? Nanotech Does Both!”.
***

The simplest approach to the functional replication of biological neurons I conceived of during this period involved what is normally called a “black-box” model of a neuron. This was already a concept in the wider brain-emulation community, though I had yet to encounter it. It is even simpler than the mathematically weighted Artificial Neurons discussed in the previous chapter. Rather than emulating or simulating the behavior of a neuron (i.e., using actual computational—or more generally signal—processing), we (1) determine the range of input values that a neuron responds to, (2) stimulate the neuron at each interval (the number of intervals depending on the precision of the stimulus) within that input range, and (3) record the corresponding range of outputs.

This reduces the neuron to essentially a look-up table (or, more formally, an associative array). The input ranges I originally considered (in 2007) consisted of a range of electrical potentials, but were later developed (in 2008) to include different cumulative organizations of specific voltage values (i.e., some inputs activated and others not) and finally the chemical inputs and outputs of neurons. I eventually saw the black-box approach as applicable at the sub-neuron scale as well—e.g., to sections of the cellular membrane. This creates a greater degree of functional precision, bringing the functional modality of the black-box NRU-class into greater accordance with the functional modality of biological neurons. (I.e., it is closer to biological neurons because they do in fact process multiple inputs separately, rather than a single cumulative sum at once, as in the previous versions of the black-box approach.) We would also have a higher degree of variability for a given quantity of inputs.
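The three-step procedure above (probe the input range at fixed intervals, record each output, then replay from the table) can be sketched in a few lines. This is only an illustrative toy, not a proposed implementation: the voltage range, probing interval, and the stand-in `mock_neuron` threshold response are all invented for demonstration.

```python
# Illustrative sketch of the "black-box" neuron reduced to an associative array.
# A real table would be built by stimulating a biological neuron across its
# input range and recording the corresponding outputs.

def build_response_table(stimulate, v_min, v_max, step):
    """Probe a neuron (here, any callable) at fixed intervals within its
    input range and record its output for each input level."""
    table = {}
    v = v_min
    while v <= v_max:
        key = round(v, 6)          # discretized input level
        table[key] = stimulate(v)  # recorded output for that level
        v += step
    return table

def lookup(table, v):
    """Replay the recorded output for the nearest probed input level."""
    nearest = min(table, key=lambda k: abs(k - v))
    return table[nearest]

# Stand-in for a biological neuron: a crude all-or-nothing threshold response.
mock_neuron = lambda v: 1.0 if v >= -55.0 else 0.0

table = build_response_table(mock_neuron, v_min=-80.0, v_max=-40.0, step=1.0)
print(lookup(table, -50.3))  # replays the response recorded nearest -50 mV
```

The precision of the stimulus (here, `step`) determines the number of intervals and thus the size of the table, which is the trade-off the passage above alludes to.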

I soon chanced upon literature dealing with MEMS (micro-electro-mechanical systems) and NEMS (nano-electro-mechanical systems), which eventually led me to nanotechnology and its use in nanosurgery in particular. I saw nanotechnology as the preferred technological infrastructure regardless of the approach used. Its physical nature (i.e., its operational and functional modalities) could facilitate the electrical and chemical processes of the neuron if the physicalist-functionalist (i.e., physically embodied or “prosthetic”) approach proved either preferable or required, while the computation required for its normative functioning (regardless of its particular application) assured that it could facilitate the informationalist-functionalist (i.e., computational emulation or simulation) replication of neurons if that approach proved preferable. This was true of MEMS as well, with the sole exception that MEMS cannot directly synthesize neurotransmitters via mechanosynthesis, being limited in this regard to the release of pre-synthesized biochemical inventories. Thus I felt able to work on the conceptual development of the methodological and technological infrastructure underlying both approaches (or at least on variations to the existing operational modalities of MEMS and NEMS so as to make them suitable for their intended use) without having to definitively choose one technological/methodological infrastructure over the other. Moreover, there could be processes that are reducible to computation, yet still fail to be included in a computational emulation simply because we have not yet discovered the principles underlying them.
The prosthetic approach had the potential of replicating this aspect by integrating such a process, as it exists in the biological environment, into its own physical operation, performing iterative maintenance or replacement of the biological process until such time as we are able to discover the underlying principles of those processes (a prerequisite for discovering how they contribute to the emergent computation occurring in the neuron) and thus include them in the informationalist-functionalist approach.

Also, I had by this time come across the existing approaches to Mind-Uploading and Whole-Brain Emulation (WBE), including Randal Koene’s minduploading.org, and realized that the notion of immortality through gradually replacing biological neurons with functional equivalents wasn’t strictly my own. I hadn’t yet come across Kurzweil’s thinking in regard to gradual uploading described in The Singularity is Near (where he suggests a similarly nanotechnological approach), and so felt that there was a gap in the extant literature in regard to how the emulated neurons or neural networks were to communicate with existing biological neurons (which is an essential requirement of gradual uploading and thus of any approach meant to facilitate subjective-continuity through substrate replacement). Thus my perceived role changed from being the father of this concept to filling in the gaps and inconsistencies in the already-extant approach and developing it further past its present state. This is another aspect informing my choice to work on and further varietize both the computational and the physical-prosthetic approach—because this, along with the artificial-biological neural-communication problem, was what I perceived as remaining to be done after discovering WBE.

The anticipated use of MEMS and NEMS in emulating the physical processes of the neuron included at first simply electrical potentials, but eventually developed to include the chemical aspects of the neuron as well, in tandem with my increasing understanding of neuroscience. I had by this time come across Drexler’s Engines of Creation, which was my first introduction to antecedent proposals for immortality—specifically his notion of iterative cellular upkeep and repair performed by nanobots. I applied his concept of mechanosynthesis to the NRUs to facilitate the artificial synthesis of neurotransmitters. I eventually realized that the use of pre-synthesized chemical stores of neurotransmitters was a simpler approach that could be implemented via MEMS, thus being more inclusive for not necessitating nanotechnology as a required technological infrastructure. I also soon realized that we could eliminate the need for neurotransmitters completely by recording how specific neurotransmitters affect the nature of membrane depolarization at the post-synaptic membrane and subsequently encoding this into the post-synaptic NRU (i.e., the length and degree of depolarization or hyperpolarization, and possibly the diameter of ion-channels or the differential opening of ion-channels—that is, some and not others). Each possible neurotransmitter (or emergent pattern of neurotransmitters; salient variables include type, quantity, and relative location) would then be assigned a discrete voltage, such that transmitting that voltage makes the post-synaptic NRU’s controlling circuit implement the membrane-polarization changes (via changing the number of open artificial ion-channels, or how long they remain open or closed, or their diameter/porosity) corresponding to the changes in biological post-synaptic membrane depolarization normally caused by that neurotransmitter.
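The voltage-encoding scheme just described is, at bottom, two paired tables: one assigning a discrete code voltage to each transmitter pattern, and one mapping each code to the membrane change the real transmitter would have caused. A minimal sketch follows; the transmitter names, code voltages, and membrane-effect figures are entirely invented for illustration.

```python
# Hypothetical sketch of encoding neurotransmitter effects as discrete voltages.
# All values are invented; a real table would be derived by recording how each
# transmitter pattern alters post-synaptic membrane depolarization.

TRANSMITTER_CODES = {
    # (transmitter type, relative quantity) -> discrete code voltage (mV)
    ("glutamate", "high"): 10.0,
    ("glutamate", "low"):   5.0,
    ("GABA", "high"):     -10.0,
}

POLARIZATION_EFFECTS = {
    # code voltage -> (polarization change in mV, duration in ms, open channels)
    10.0:  (15.0, 2.0, 120),
    5.0:   (7.0, 1.5, 60),
    -10.0: (-12.0, 3.0, 80),   # negative change: hyperpolarization
}

def transmit(transmitter, quantity):
    """Pre-synaptic side: encode a transmitter event as its code voltage."""
    return TRANSMITTER_CODES[(transmitter, quantity)]

def apply_code(code):
    """Post-synaptic controlling circuit: replay the recorded membrane effect."""
    return POLARIZATION_EFFECTS[code]

print(apply_code(transmit("GABA", "high")))  # (-12.0, 3.0, 80)
```

In this scheme no chemical stores are needed at all: only the code voltage crosses the artificial synapse, which is the simplification the passage above is after.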

In terms of the enhancement/self-modification side of things, I also realized during this period that mental augmentation (particularly the intensive integration of artificial-neural-networks with the existing brain) increases the efficacy of gradual uploading by decreasing the total portion of your brain occupied by the biological region being replaced—thus effectively making that portion’s temporary operational disconnection from the rest of the brain more negligible to concerns of subjective-continuity.

While I was thinking of the societal implications of self-modification and self-modulation in general, I wasn’t really consciously trying to do active conceptual work (e.g., working on designs for pragmatic technologies and methodologies as I was with limitless-longevity) on this side of the project due to seeing the end of death as being a much more pressing moral imperative than increasing our degree of self-determination. The 100,000 unprecedented calamities that befall humanity every day cannot wait; for these dying fires it is now or neverness.

Virtual Verification Experiments

The various alternative approaches to gradual substrate-replacement were meant to be alternative designs contingent upon various premises for what was needed to replicate functionality while retaining subjective-continuity through gradual replacement. I saw the various embodiments as being narrowed down through empirical validation prior to any whole-brain replication experiments. However, I now see that multiple alternative approaches—based, for example, on computational emulation (informationalist-functionalist) and physical replication (physicalist-functionalist) (these are the two main approaches thus far discussed) would have concurrent appeal to different segments of the population. The physicalist-functionalist approach might appeal to wide numbers of people who, for one metaphysical prescription or another, don’t believe enough in the computational reducibility of mind to bet their lives on it.

These experiments originally consisted of applying sensors to a given biological neuron, constructing NRUs based on a series of variations on the two main approaches, running each, and looking for any functional divergence over time. This is essentially the same approach outlined in the WBE Roadmap (which I had yet to discover at this point), which suggests a validation approach involving experiments done on single neurons before moving on to the organismal emulation of increasingly complex species, up to and including the human. My thinking in regard to these experiments evolved over the next few years to also include some novel approaches that I don’t think have yet been discussed in communities interested in brain emulation.

An equivalent physical or computational simulation of the biological neuron’s environment is required to verify functional equivalence; otherwise we wouldn’t be able to distinguish between functional divergence due to an insufficient replication approach or NRU design and functional divergence due to a difference in either input or operation between the model and the original (caused by insufficiently synchronizing the environmental parameters of the NRU and its corresponding original). Isolating these neurons from their organismal environment allows the necessary fidelity (and thus computational intensity) of the simulation to be minimized by reducing the number of environmental variables affecting the biological neuron during the span of the initial experiments. Moreover, even if this doesn’t give us a perfectly reliable model of the efficacy of functional replication given the number of environmental variables one expects a neuron belonging to a full brain to have, it is a fair approximator. Some NRU designs might fail in a relatively simple neuronal environment, and thus testing all NRU designs using a number of environmental variables similar to the biological brain might be unnecessary (and economically prohibitive) given its cost-benefit ratio. And since we need to isolate the neuron to perform any early non-whole-organism experiments (i.e., on individual neurons) at all, having precise control over the number and nature of environmental variables would be relatively easy, as this is already an important part of the methodology used for normative biological experimentation anyway—because lack of control over environmental variables makes for an inconsistent methodology and thus for unreliable data.

And as we increase to the whole-network and eventually organismal level, a similar reduction of the computational requirements of the NRU’s environmental simulation is possible by replacing the inputs or sensory mechanisms (from single photocell to whole organs) with VR-modulated input. The required complexity and thus computational intensity of a sensorially mediated environment can be vastly minimized if the normative sensory environment of the organism is supplanted with a much-simplified VR simulation.

Note that the efficacy of this approach in comparison with the first (reducing actual environmental variables) is hypothetically greater, because going from the simplified VR version to the original sensorial environment is a difference not of category but of degree. Thus a potentially fruitful variation on the first experiment (physical reduction of a biological neuron’s environmental variables) would be not the complete elimination of environmental variables, but rather a decrease in the range or degree of deviation of each variable—retaining all the categories while merely reducing their degree.

Anecdotally, one novel modification conceived during this period involves distributing sensors (operatively connected to the sensory areas of the CNS) in the brain itself, so that we can viscerally sense ourselves thinking—the notion of metasensation: a sensorial infinite regress caused by having sensors in the sensory modules of the CNS, essentially allowing one to sense oneself sensing oneself sensing.

Another is a seeming refigurement of David Pearce’s Hedonistic Imperative—namely, the use of active NRU modulation to negate the effects of cell (or, more generally, stimulus-response) desensitization—the fact that the more times we experience something, or indeed even think something, the more it decreases in intensity. I felt that this was what made some of us lose interest in our lovers and become bored by things we once enjoyed. If we were able to stop cell desensitization, we wouldn’t have to needlessly lose experiential amplitude for the things we love.

In the next chapter I will describe the work I did in the first months of 2008, during which I worked almost wholly on conceptual varieties of the physically embodied prosthetic (i.e., physical-functionalist) approach (particularly in gradually replacing subsections of individual neurons to increase how gradual the cumulative procedure is) for several reasons:

The original utility of ‘hedging our bets’ as discussed earlier—developing multiple approaches increases evolutionary diversity; thus, if one approach fails, we have other approaches to try.

I felt the computational side was already largely developed in the work done by others in Whole-Brain Emulation, and thus that I would be benefiting the larger objective of indefinite longevity more by focusing on those areas that were then comparatively less developed.

The perceived benefit of a new approach to subjective-continuity through a substrate-replacement procedure aiming to increase the likelihood of gradual uploading’s success by increasing the procedure’s cumulative degree of graduality. The approach was called Iterative Gradual Replacement and consisted of undergoing several gradual-replacement procedures, wherein the class of NRU used becomes progressively less similar to the operational modality of the original, biological neurons with each iteration; the greater the number of iterations used, the less discontinuous each replacement phase is in relation to its preceding and succeeding phases. The most basic embodiment of this approach would involve gradual replacement with physical-functionalist (prosthetic) NRUs that are in turn gradually replaced with informational-physicalist (computational/emulatory) NRUs. My qualms with this approach today stem from the observation that the operational modalities of the physically embodied NRUs seem as discontinuous in relation to the operational modalities of the computational NRUs as the operational modalities of the biological neurons do. The problem seems to result from the lack of an intermediary stage between physical embodiment and computational (or second-order) embodiment.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.


Get Your “Supporter of Indefinite Life Extension” Open Badge


The Rational Argumentator is now offering a free Open Badge to any individual who supports the concept of indefinite human life extension. To claim the badge, click here.

This badge was designed by Wendy Stolyarov, whose art you can see here, here, and here.

If you would like to find out more about Open Badges and the empowering role they can have in producing a new Age of Enlightenment, read this essay.

You would need a free account with Mozilla Backpack to receive the badge. And, of course, you would need to think that indefinite human life extension is desirable. That is all!

You would receive the badge for being a supporter of extending human lifespans beyond any fixed limit. Indefinite human life extension includes the defeat of senescence and other diseases, and the achievement of indefinite youthfulness. While indefinite life extension would not make people indestructible and would not eradicate all causes of death, it would nonetheless lift the “inevitable” death sentence that currently hangs over us all.  Indefinite life extension could be achieved in the future through advances in medical technology, including biotechnology, nanotechnology, and information technology.

Even thinking favorably of indefinite life extension is a courageous, forward-thinking, and highly beneficial attitude to take. Enjoy your reward!

Non-Apocalypse, Existential Risk, and Why Humanity Will Prevail – Video by G. Stolyarov II


Doomsday predictions are not only silly but bring about harmful ways of approaching life and the world. Mr. Stolyarov expresses his view that there will never be an end of the world, an end of humanity, or an end of civilization. While some genuine existential risks do exist, most of them are not man-made, and even the man-made risks are largely in the past.

References

– “Transhumanism and the 2nd Law of Thermodynamics” – Video by G. Stolyarov II

Update to Resources on Indefinite Life Extension – May 19, 2012


TRA’s Resources on Indefinite Life Extension page has been enhanced over the past month with links to numerous fascinating articles and videos.

Articles

– “New Laser For Neurosurgery Allows Greater Precision And Efficiency For Removal Of Complex Tumors” – ScienceDaily – January 28, 2009

– “Tiny Particles May Help Surgeons by Marking Brain Tumors” – ScienceDaily – April 29, 2010

– “Tagging Tumors With Gold: Scientists Use Gold Nanorods to Flag Brain Tumors” – ScienceDaily – October 12, 2011

– “Immortal worms defy aging” – KurzweilAI – February 29, 2012

– “Earth 2512: humans embrace their technologies; reach for the stars” – Dick Pelletier – Positive Futurist – April 2012

– “Teenager Invents Anti-Aging, Disease-Fighting Compound Using Tree Nanoparticles” – Science 2.0 – May 8, 2012

– “A Libertarian Transhumanist Critique of Jeffrey Tucker’s ‘A Lesson in Mortality’” – G. Stolyarov II – May 13, 2012

– “Gene therapy for aging-associated decline” – KurzweilAI – May 16, 2012

– “Breakthrough in Gene Therapy Holds Great Promise” – Joshua Lipana – The Objective Standard – May 16, 2012

Videos

Aubrey de Grey – Debate with Colin Blakemore: “This house wants to defeat ageing entirely”

Part 1 – Main Debate 

Part 2 – Audience Q&A

The Sheldonian Theatre, Oxford University – April 28, 2012

Aziz Aboobaker

Neverending DNA and Immortal Worms – February 27, 2012

G. Stolyarov II

The Real War – and Why Inter-Human Wars Are a Distraction – March 15, 2012

A Libertarian Transhumanist Critique of Jeffrey Tucker’s “A Lesson in Mortality” – May 15, 2012

A Libertarian Transhumanist Critique of Jeffrey Tucker’s “A Lesson in Mortality” – Audio Essay by G. Stolyarov II, Read by Wendy Stolyarov


Mr. Stolyarov, a libertarian transhumanist, offers a rebuttal to the arguments in Jeffrey Tucker’s 2005 essay, “A Lesson in Mortality“.

This essay is read by Wendy Stolyarov.

As a libertarian transhumanist, Mr. Stolyarov sees the defeat of “inevitable” human mortality as the logical outcome of the intertwined forces of free markets and technological progress – the very forces about which Mr. Tucker writes at length.

Read the text of Mr. Stolyarov’s essay here.
Download the MP3 file of this essay here.
Download a vast compendium of audio essays by Mr. Stolyarov and others at TRA Audio.

References

It’s a Jetsons World – Book by Jeffrey Tucker
– “Without Rejecting IP, Progress is Impossible” – Essay by Jeffrey Tucker – July 18, 2010
– “The Quest for Indefinite Life II: The Seven Deadly Things and Why There Are Only Seven” – Essay by Dr. Aubrey de Grey – July 30, 2004
Resources on Indefinite Life Extension (RILE)
– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Essay by G. Stolyarov II

A Libertarian Transhumanist Critique of Jeffrey Tucker’s “A Lesson in Mortality” – Article by G. Stolyarov II


The New Renaissance Hat
G. Stolyarov II
May 13, 2012
******************************

Jeffrey Tucker is one of my favorite pro-technology libertarian thinkers of our time. In his essays and books (see, for instance, It’s a Jetsons World), Mr. Tucker eloquently draws the connection between free markets and technological progress – and how the power of human creativity within a spontaneous order can overcome the obstructions posed by stagnant political and attitudinal paradigms. Mr. Tucker embraces the innovations of the Internet age and has written on their connection with philosophical debates – such as whether the idea of intellectual property is even practically tenable anymore, now that electronic technology renders certain human creations indefinitely reproducible.

Because I see Mr. Tucker as such an insightful advocate of technological progress in a free-market context, I was particularly surprised to read his 2005 article, “A Lesson in Mortality” – where Mr. Tucker contends that death is an inescapable aspect of the human condition. His central argument is best expressed in his own words: “Death impresses upon us the limits of technology and ideology. It comes in time no matter what we do. Prosperity has lengthened life spans and science and entrepreneurship has made available amazing technologies that have forestalled and delayed it. Yet, it must come.” Mr. Tucker further argues that “Modernity has a problem intellectually processing the reality of death because we are so unwilling to defer to the implacable constraints imposed on us within the material world… To recognize the inevitability of death means confessing that there are limits to our power to manufacture a reality for ourselves.”

Seven years is a long time, and I am not aware of whether Mr. Tucker’s views on this subject have evolved since this article was published. Here, I offer a rebuttal to his main arguments and invite a response.

To set the context for his article, Mr. Tucker discusses the deaths of short-lived pets within his family – and how his children learned the lesson to grieve for and remember those whom they lost, but then to move on relatively quickly and to proceed with the business of life – “to think about death only when they must, but otherwise to live and love every breath.” While I appreciate the life-embracing sentiment here, I think it concedes too much to death and decay.

As a libertarian transhumanist, I see the defeat of “inevitable” human mortality as the logical outcome of the intertwined forces of free markets and technological progress. While we will not, at any single instant in time, be completely indestructible and invulnerable to all possible causes of death, technological progress – if not thwarted by political interference and reactionary attitudes – will sequentially eliminate causes of death that would have previously killed millions. This has already happened in many parts of the world with regard to killers like smallpox, typhus, cholera, malaria – and many others. It is not a stretch to extrapolate this progression and apply it to perils such as cancer, heart disease, stroke, Alzheimer’s disease, and ALS. Since human life expectancy has already increased roughly five-fold since the Paleolithic era, it is not inconceivable that – with continued progress – another five-fold or greater increase can be achieved.

As biogerontologist and famous life-extension advocate Dr. Aubrey de Grey points out, the seven basic types of damage involved in human senescence are already known – each for at least thirty years. With advances in computing capacity, as well as accelerating medical discoveries that have already achieved life extension in mice, rats, and other small organisms, there is hope that medical progress will arrive at similar breakthroughs for us within our lifetimes. Once life expectancy begins to increase by more than one year for every year of time that passes, we will have reached longevity escape velocity – a condition where the longer we live, the greater our probability of surviving even longer becomes. In February 2012 I began an online compendium of Resources on Indefinite Life Extension, which tracks ongoing developments in this field and provides access to a wide array of media to show that life extension is not just science fiction, but an ongoing enterprise.
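The escape-velocity condition is easy to see numerically. In this toy sketch (all figures invented), each calendar year a person ages one year while medical progress adds some number of years of remaining life expectancy; remaining expectancy shrinks toward zero when the annual gain is under one year, but grows without bound once it exceeds one:

```python
# Toy model of longevity escape velocity. Each calendar year a person ages
# one year but medical progress adds `gain_per_year` years of remaining
# life expectancy, so the net annual change is gain_per_year - 1.
# All numbers are invented for illustration.

def remaining_expectancy(remaining, gain_per_year, years):
    """Trajectory of remaining life expectancy over `years` calendar years."""
    trajectory = [remaining]
    for _ in range(years):
        remaining += gain_per_year - 1
        trajectory.append(round(remaining, 1))
    return trajectory

# Below escape velocity: remaining years shrink; death still arrives eventually.
print(remaining_expectancy(50, gain_per_year=0.5, years=3))  # [50, 49.5, 49.0, 48.5]
# Above escape velocity: remaining years grow the longer one lives.
print(remaining_expectancy(50, gain_per_year=1.5, years=3))  # [50, 50.5, 51.0, 51.5]
```

The crossover at a gain of exactly one year per year is the "velocity" in the term: above it, expected death recedes faster than time advances.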

To Mr. Tucker, I pose the question of why he appears to think that, despite the technological progress and economic freedom whose benefits he clearly recognizes, there would always be some upper limit on human longevity that these incredibly powerful forces would be unable to breach. What evidence exists for such a limit – and, even if such evidence exists, why does Mr. Tucker appear to assume that our currently finite lifespans are not just a result of our ignorance, which could be remedied in a more advanced and enlightened future? In the 15th century, for instance, humans lacked the technical knowledge to achieve powered flight, even though visionaries such as Leonardo da Vinci correctly anticipated the advent of flying machines. Imagine if a Renaissance scholar had made the argument to da Vinci that, while the advances of the Renaissance have surely produced improvements in art, architecture, music, and commerce, nature still imposes insurmountable limits on humans taking to the skies! “Sure,” this scholar might say, “we can now construct taller and sturdier buildings, but the realm of the birds will be forever beyond our reach.” He might say, paraphrasing Mr. Tucker, “[Early] modernity has a problem intellectually processing the reality of eternally grounded humans because we are so unwilling to defer to the implacable constraints imposed on us within the material world. To recognize the inevitability of human grounding means confessing that there are limits to our power to manufacture a reality for ourselves.” What would have happened to a society that fully accepted such arguments? Perhaps the greatest danger we can visit upon ourselves is to consider a problem so “inevitable” that nothing can be done about it. By accepting this inevitability as a foregone conclusion, we foreclose on the inherently unpredictable possibilities that human creativity and innovation can offer. In other words, we foreclose on a better future.

Mr. Tucker writes that “Whole ideologies have been concocted on the supposition that such constraints [on the material world] do not have to exist. That is the essence of socialism. It is the foundation of US imperialism too, with its cocky supposition that there is nothing force cannot accomplish, that there are no limits to the uses of power.” It is a significant misunderstanding of transhumanism to compare it to either socialism or imperialism. Both socialism and imperialism rely on government force to achieve an outcome deemed to be just or expedient. Transhumanism does not depend on force. While governments can and do fund scientific research, this is not an optimal implementation of transhuman aspirations, since government funding of research is notoriously conservative and reluctant to risk taxpayer funds on projects without short-term, visible payoffs about which politicians can boast. Furthermore, government funding of research renders it easier for the research to be thwarted by taxpayers – such as fundamentalist evangelical Christians – who disagree with the aims of such research. The most rapid technological advances can be achieved on a pure free market, where research is neither subsidized nor restricted by any government.

Moreover, force is an exceedingly blunt instrument. While it can be used to some effect to dispose of criminals and tyrants, even there it is tremendously imperfect and imposes numerous unintended negative consequences. Transhumanism is not about attempting to overcome material constraints by using coercion. It is, rather, about improving our understanding of natural laws and our ability to harness mind and matter by giving free rein to human experimentation in applying these laws.

Transhumanism fully embraces Francis Bacon’s dictum that “Nature, to be commanded, must be obeyed.” This means working within material constraints – including the laws of economics – and making the most of what is possible. But this also means using human ingenuity to push out our material limits. As genetic modification of crops has resulted in vastly greater volumes of food production, so can genetic engineering, rejuvenation therapies, and personalized medicine eventually result in vastly longer human lifespans. Transhumanism is the logical extrapolation of a free-market economy. The closer we get to an unfettered free market, the faster we could achieve the transhuman goals of indefinite life extension, universal wealth, space colonization, ubiquitous erudition and high culture, and the conquest of natural and manmade existential risks.

Mr. Tucker writes that recognizing the inevitability of death “is akin to admitting that certain fundamental facts of the world, like the ubiquity of scarcity, cannot be changed. Instead of attempting to change it, we must imagine social systems that come to terms with it. This is the core claim of economic science, and it is also the very reason so many refuse to acknowledge its legitimacy or intellectual binding power.” It is undeniable that scarcity exists, and that scarcity of some sort will always exist. However, there are degrees of scarcity. Food, for instance, is much less scarce today than in the Paleolithic era, when the earth could support barely more than a million humans. Furthermore, in some realms, such as digital media, Mr. Tucker himself has acknowledged that scarcity is no longer a significant limitation – because of the capacity to indefinitely reproduce works of art, music, and writing. With the proximate advent of technologies such as three-dimensional printing and tabletop nano-manufacturing, more and more goods will begin to assume qualities that more closely resemble digital goods. Then, as now, some physical resources will be required to produce anything – and these physical resources would continue to be subject to the constraints of scarcity. But it is not inconceivable that we would eventually end up in a Star Trek world of replicators that can manufacture most small-scale goods out of extremely cheap basic substances, which would render those goods nearly free to reproduce. Even in such a world, more traditional techniques may be required to construct larger structures, but subsequent advances may make even those endeavors faster, cheaper, and more accessible.

At no point in time would human lifespans be infinite (in the sense of complete indestructibility or invulnerability). A world of scarcity is, however, compatible with indefinite lifespans that do not have an upper bound. A person’s life expectancy at any point in time would be finite, but that finite amount might increase faster than the person’s age. Even in the era of longevity escape velocity, some people would still die of accidents, unforeseen illnesses, or human conflicts. But the motivation to conquer these perils will be greatly increased once the upper limit on human lifespans is lifted. Thus, I expect actual human mortality to asymptotically approach zero, though perhaps without ever reaching zero entirely. Still, for a given individual, death would no longer be an inevitability, particularly if that individual behaves in a risk-averse fashion and takes advantage of cutting-edge advancements. Even if death is always a danger on some level, is it not better to act to delay or prevent it – and therefore to get as much time as possible to live, create, and enjoy?
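The arithmetic behind a life expectancy that “might increase faster than the person’s age” can be made concrete with a toy model. The sketch below is my own illustrative assumption (a constant per-year gain in remaining life expectancy from medical progress), not a claim from the essay; it simply shows why a gain above one year per year causes a person’s projected death to recede indefinitely, even though their remaining expectancy at any moment stays finite.

```python
# Toy model of "longevity escape velocity" (illustrative only):
# each calendar year consumes one year of remaining life expectancy,
# while medical progress adds `gain_per_year` years back.

def remaining_expectancy(remaining, gain_per_year, years):
    """Track remaining life expectancy over `years` calendar years.

    remaining: current remaining life expectancy (years)
    gain_per_year: expectancy added annually by medical progress
    Returns remaining expectancy after the period, or 0.0 if exhausted.
    """
    for _ in range(years):
        remaining -= 1.0            # one year of life consumed
        remaining += gain_per_year  # progress adds expectancy back
        if remaining <= 0.0:
            return 0.0
    return remaining

# Below escape velocity (gain < 1/year): expectancy is eventually used up.
print(remaining_expectancy(25.0, 0.5, 60))   # 0.0 after 50 years

# Above escape velocity (gain > 1/year): finite at every moment,
# yet growing without bound – death recedes faster than one ages.
print(remaining_expectancy(25.0, 1.2, 60))   # ≈ 37.0
```

At every step the person’s remaining expectancy is a finite number, consistent with the essay’s point that lifespans would be indefinite rather than infinite: the bound keeps moving, but it never disappears, and accidents or conflicts can still intervene.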

Mr. Tucker writes: “To discover the fountain of youth is a perpetual obsession, one that finds its fulfillment in the vitamin cults that promise immortality. We create government programs to pay for people to be kept alive forever on the assumption that death is always and everywhere unwarranted and ought to be stopped. There is no such thing as ‘natural death’ anymore; the very notion strikes us as a cop out.” It is true that there are and have always been many dubious remedies, promising longevity-enhancing benefits without any evidence. However, even if false remedies are considered, we have come a long way from the Middle Ages, when, in various parts of the world, powders of gold, silver, or lead – or even poisons such as arsenic – were considered to have life-extending powers. More generally, the existence of charlatans, frauds, snake-oil salesmen, and gullible consumers does not discredit genuine, methodical, scientific approaches toward life extension or any other human benefit. Skepticism and discernment are always called for, and we should always be vigilant regarding “cures” that sound too good to be true. Nobody credible has said that conquering our present predicament of mortality would be easy or quick. There is no pill one can swallow, and there is little in terms of lifestyle that one can do today – other than exercising regularly and avoiding obviously harmful behaviors – to materially lengthen one’s lifespan. However, if some of the best minds in the world are able to utilize some of the best technology we have – and to receive the philosophical support of the public and the material support of private donors for doing so – then this situation may change within our lifetimes. It is far better to live with this hope, and to work toward this outcome, than to resign oneself to the inevitability of death.

As regards government programs, I find no evidence for Mr. Tucker’s assertion that these programs are the reason that people are being kept alive longer. Implicit in that assertion is the premise that, on a fully free market (where high-quality healthcare would ultimately be cheaper), people would not voluntarily pay to extend the lives of elderly or seriously ill patients to the same extent that they expect such life extension to occur when funded by Medicare or by the national health-care systems in Canada and Europe. Indeed, Mr. Tucker’s assertion here poses a serious danger to defenders of the free market. It renders them vulnerable to the allegation that an unfettered free market would shorten life expectancies and invite the early termination of elderly or seriously ill patients – in short, the classic nightmare scenario of eliminating the weak, sickly, or otherwise “undesirable” elements. This is precisely what a free market would not result in, because the desire to live is extremely strong for most individuals, and free individuals using their own money would be much more likely to put it toward keeping themselves alive than would a government-based system which must ultimately ration care in one way or another.

Mr. Tucker writes: “Thus do we insist on always knowing the ‘cause’ of death, as if it only comes about through an exogenous intervention, like hurricanes, traffic accidents, shootings, and bombs. But even when a person dies of his own accord, we always want to know so that we have something to blame. Heart failure? Well, he or she might have done a bit more exercise. Let this be a lesson. Cancer? It’s probably due to smoking, or perhaps second-hand smoke. Or maybe it was the carcinogens introduced by food manufacturers or factories. We don’t want to admit that it was just time for a person to die.” Particularly as Austrian Economics, of which Mr. Tucker is a proponent, champions a rigorous causal analysis of phenomena, the above excerpt strikes me as incongruous with how rational thinkers ought to approach any event. Clearly, there are no uncaused events; there is nothing inexplicable in nature. Sometimes the explanations may be difficult or complex to arrive at; sometimes our minds are too limited to grasp the explanations at our present stage of knowledge and technological advancement. However, all valid questions are ultimately answerable, and all problems are ultimately solvable – even if not by us. The desire to know the cause of a death is a desire to know the answers to important questions, and to derive value from such answers by perhaps gathering information that would help oneself and others avoid a similar fate. To say that “it was just time for a person to die” explains nothing; it only attempts to fill in the gaps in our knowledge with an authoritative assertion that forecloses further inquiry and discovery. While this may, to some, be comforting as a way of “moving on” – to me and other transhumanists it is an eminently frustrating way of burying the substance of the matter with a one-liner.

Mr. Tucker also compares death to sleep: “The denial of death’s inevitability is especially strange since life itself serves up constant reminders of our physical limits. Sleep serves as a kind of metaphor for death. We can stay awake working and having fun up to 18 hours, even 24 or 36, but eventually we must bow to our natures and collapse and sleep. We must fall unconscious so that we can be revived to continue on with our life.” While sleep is a suspension of some activities, death and sleep could not be more different. Sleep is temporary, while death is permanent. Sleep preserves significant aspects of consciousness, as well as a continuity of operations for the brain and the rest of the body. While one sleeps, one’s brain is hard at work “repackaging” the contents of one’s memory to prepare one for processing fresh experiences the next day. Death, on the other hand, is not a preparation for anything. It is the cessation of the individual, not a buildup to something greater or more active. In “How Can I Live Forever: What Does or Does Not Preserve the Self”, I describe the fundamental difference between processes, such as sleep, which preserve the basic continuity of bodily functions (and thus one’s unique vantage point or “I-ness”) and processes that breach this continuity and result in the cessation of one’s being. Continuity-preserving processes are fundamentally incomparable to continuity-breaching processes, and thus the ubiquity and necessity of sleep can tell us nothing regarding death.

Mr. Tucker validly notes that the human desire to live forever can manifest itself in the desire to leave a legacy and to create works that outlive the individual. This is an admirable sentiment, and it is one that has fueled the progress of human civilization even in eras when mortality was truly inevitable. I am glad that our ancestors had this motivation to overcome the sense of futility and despair that their individual mortality would surely have engendered otherwise. But we, standing on their shoulders and benefiting from their accomplishments, can do better. The wonders of technological progress within the near term, about which Mr. Tucker writes eloquently and at length, can be extrapolated to the medium and long term in order for us to see that the transhumanist ideal of indefinite life extension is both feasible and desirable. Free markets, entrepreneurship, and human creativity will help pave the way to the advances that could save us from the greatest peril of them all. I hope that, in time, Mr. Tucker will embrace this prospect as the incarnation, not the enemy, of libertarian philosophy and rational, free-market economics.
Update to Resources on Indefinite Life Extension – April 19, 2012

TRA’s Resources on Indefinite Life Extension page has been expanded today with links to several engaging articles, some describing recent groundbreaking discoveries in layman-accessible terms.

– “Group Set To Sequence 1000 Genomes By The End Of The Year” – Peter Murray – Singularity Hub – April 4, 2012

– “Nanostars Deliver Cancer Drugs Direct To Nucleus” – Catharine Paddock, PhD – Medical News Today – April 8, 2012

– “Human-machine interfaces: becoming one with our machines” – Dick Pelletier – Positive Futurist – April 2012

– “Fullerene C60 administration doubles rat lifespan with no toxicity” – KurzweilAI.net – April 17, 2012

– “Eternal health and youth will soon be possible, scientists say” – Dick Pelletier – Positive Futurist – April 17, 2012