
Intimations of Imitations: Visions of Cellular Prosthesis and Functionally Restorative Medicine – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 23, 2013
******************************

In this essay I argue that the technologies and techniques used and developed in the fields of Synthetic Ion Channels and Ion-Channel Reconstitution, which have emerged from supramolecular chemistry and bio-organic chemistry over the past four decades, can be applied towards the purpose of gradual cellular (and particularly neuronal) replacement. This would create a new interdisciplinary field that directs such techniques and technologies towards the goal of the indefinite functional restoration of cellular mechanisms and systems, as opposed to their currently proposed uses: aiding in the elucidation of cellular mechanisms and their underlying principles, and serving as biosensors.

In earlier essays (see here and here) I identified approaches to the synthesis of non-biological functional equivalents of neuronal components (i.e., ion-channels, ion-pumps, and membrane sections) and their sectional integration with the existing biological neuron — a sort of “physical” emulation, if you will. It has only recently come to my attention that there is an existing field emerging from supramolecular and bio-organic chemistry centered on the design, synthesis, and incorporation/integration of both synthetic/artificial ion channels and artificial bilipid membranes (i.e., lipid bilayers). The potential uses for such channels commonly listed in the literature have nothing to do with life-extension, however, and the field has, to my knowledge, yet to envision the replacement of our existing neuronal components as they degrade (or before they are able to), instead seeing such channels as aids in the elucidation of cellular operations and mechanisms and as biosensors. I argue here that the very technologies and techniques that constitute the field (Synthetic Ion Channels & Ion-Channel/Membrane Reconstitution) can be used towards the purposes of indefinite longevity and life-extension through the iterative replacement of cellular constituents (particularly the components comprising our neurons – ion-channels, ion-pumps, sections of bi-lipid membrane, etc.) so as to negate the molecular degradation they would otherwise eventually have undergone.

While I envisioned an electro-mechanical-systems approach in my earlier essays, the field of Synthetic Ion-Channels has, from its start in the early 1970s, applied a molecular approach to the problem of designing molecular systems that produce certain functions according to their chemical composition or structure. Note that this approach corresponds to (or can be categorized under) the passive-physicalist sub-approach of the physicalist-functionalist approach (the broad approach overlying all varieties of physically embodied, “prosthetic” neuronal functional replication) identified in an earlier essay.

The field of synthetic ion channels is also referred to as ion-channel reconstitution, which designates “the solubilization of the membrane, the isolation of the channel protein from the other membrane constituents and the reintroduction of that protein into some form of artificial membrane system that facilitates the measurement of channel function,” and more broadly denotes “the [general] study of ion channel function and can be used to describe the incorporation of intact membrane vesicles, including the protein of interest, into artificial membrane systems that allow the properties of the channel to be investigated” [1]. The field has been active since the 1970s, with experimental successes throughout the 1980s, 1990s, and 2000s in incorporating functioning synthetic ion channels both into biological bilipid membranes and into artificial membranes dissimilar in molecular composition and structure to their biological analogues, elucidating the underlying supramolecular interactions, ion selectivity, and permeability. The relevant literature suggests that their proposed use has thus far been limited to the elucidation of ion-channel function and operation, the investigation of their functional and biophysical properties, and, to a lesser degree, the purpose of “in-vitro sensing devices to detect the presence of physiologically active substances including antiseptics, antibiotics, neurotransmitters, and others” through the “… transduction of bioelectrical and biochemical events into measurable electrical signals” [2].

Thus my proposal of gradually integrating artificial ion-channels and/or artificial membrane sections for the purpose of indefinite longevity (that is, their use in replacing existing biological neurons towards the aim of gradual substrate replacement, or indeed even in the alternative use of constructing artificial neurons which, rather than replacing existing biological neurons, become integrated with existing biological neural networks towards the aim of intelligence amplification and augmentation while assuming functional and experiential continuity with our existing biological nervous system) appears to be novel, while the notion of artificial ion-channels and of artificial neuronal membrane systems in general had already been conceived (and successfully created and experimentally verified, though presumably not integrated in vivo).

The field of Functionally Restorative Medicine (and the orphan sub-field of whole-brain gradual-substrate replacement, or “physically embodied” brain-emulation, if you like) can take advantage of the decades of experimental progress in this field, incorporating both the technological and methodological infrastructures used in and underlying the field of Ion-Channel Reconstitution and Synthetic/Artificial Ion Channels & Membrane-Systems (and the technologies and methodologies underlying their corresponding experimental-verification and incorporation techniques) for the purpose of indefinite functional restoration via the gradual and iterative replacement of neuronal components (including sections of bilipid membrane, ion channels, and ion pumps) by MEMS (micro-electro-mechanical systems) or, more likely, NEMS (nano-electro-mechanical systems).

The technological and methodological infrastructure underlying this field can be utilized both for the creation of artificial neurons and for the artificial synthesis of normative biological neurons. Much work in the field required artificially synthesizing cellular components (e.g., bilipid membranes) with structural and functional properties as similar to normative biological cells as possible, so that alternative designs (i.e., ones dissimilar to the normal structural and functional modalities of biological cells or cellular components), and the ways they affect and elucidate cellular properties, could be tested effectively. The iterative replacement of single neurons, or the sectional replacement of neurons with synthesized cellular components (including sections of the bilipid membrane, voltage-dependent ion-channels, ligand-dependent ion-channels, ion pumps, etc.), is made possible by the large body of work already done in the field. Consequently the technological, methodological, and experimental infrastructures developed for the fields of Synthetic Ion Channels and Ion-Channel/Artificial-Membrane Reconstitution can be utilized for the purpose of (a) iterative replacement and cellular upkeep via biological analogues (i.e., analogues not differing significantly in structure or in functional and operational modality from their normal biological counterparts) and/or (b) iterative replacement with non-biological analogues of alternate structural and/or functional modalities.

Rather than sensing when a given component degrades and then replacing it with an artificially synthesized biological or non-biological analogue, it appears to be much more efficient to determine the projected time it takes for a given component to degrade or otherwise lose functionality, and simply to automate its iterative replacement on that schedule, without providing in vivo systems for detecting molecular or structural degradation. This would allow us to achieve both experimental and pragmatic success in such cellular prosthesis sooner, because it doesn’t rely on the complex technological and methodological infrastructure underlying in vivo sensing, especially on the scale of single neuronal components like ion-channels, and it avoids causing operational or functional distortion to the components being sensed.
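
To make the logic of schedule-driven (rather than sensing-driven) replacement concrete, here is a minimal sketch in Python. The component lifetimes and the safety margin are hypothetical placeholder values of my own, not empirical figures; the point is only that replacement events can be derived from projected degradation times alone, with no in vivo sensing step.

```python
# Toy scheduler for pre-emptive component replacement.
# All lifetimes are assumed, illustrative values -- not empirical data.

PROJECTED_LIFETIME_DAYS = {
    "ion_channel": 30,
    "ion_pump": 45,
    "membrane_section": 90,
}

SAFETY_FRACTION = 0.5  # replace well before the projected failure point


def replacement_schedule(horizon_days):
    """Return sorted (day, component) replacement events over the horizon."""
    events = []
    for component, lifetime in PROJECTED_LIFETIME_DAYS.items():
        interval = max(1, int(lifetime * SAFETY_FRACTION))
        for day in range(interval, horizon_days + 1, interval):
            events.append((day, component))
    return sorted(events)


if __name__ == "__main__":
    for day, component in replacement_schedule(90):
        print(f"day {day:3d}: replace {component}")
```

In practice the projected lifetimes would have to come from empirical degradation studies of each component class; the scheduling logic itself remains this simple.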

A survey of progress in the field [3] lists several broad design motifs. I will first list the design motifs falling within the scope of the survey, along with the examples it provides. Selections from both surveys are meant to show the depth and breadth of the field, rather than to elucidate the specific chemical or kinetic operations under the purview of each design variety.

For a much more comprehensive, interactive bibliography of papers falling within the field of Synthetic Ion Channels or constituting the historical foundations of the field, see Jon Chui’s online bibliography here, which charts the developments in this field up until 2011.

First Survey

Unimolecular ion channels:

Examples include (a) synthetic ion channels with oligocrown ionophores [5], (b) using α-helical peptide scaffolds and rigid push–pull p-octiphenyl scaffolds for the recognition of polarized membranes [6], and (c) modified varieties of the ß-helical scaffold of gramicidin A [7].

Barrel-stave supramolecules:

Examples of this general class include voltage-gated synthetic ion channels formed by macrocyclic bolaamphiphiles and rigid-rod p-octiphenyl polyols [8].

Macrocyclic, branched and linear non-peptide bolaamphiphiles as staves:

Examples of this sub-class include synthetic ion channels formed by (a) macrocyclic, branched, and linear bolaamphiphiles and dimeric steroids [9], and by (b) non-peptide macrocycles, acyclic analogs, and peptide macrocycles (respectively) containing abiotic amino acids [10].

Dimeric steroid staves:

Examples of this sub-class include channels using polyhydroxylated norcholentriol dimers [11].

p-Oligophenyls as staves in rigid-rod ß-barrels:

Examples of this sub-class include “cylindrical self-assembly of rigid-rod ß-barrel pores preorganized by the nonplanarity of p-octiphenyl staves in octapeptide-p-octiphenyl monomers” [12].

Synthetic polymers:

Examples of this sub-class include synthetic ion channels and pores comprised of (a) polyalanine, (b) polyisocyanates, and (c) polyacrylates [13], formed by (i) ionophoric, (ii) ‘smart’, and (iii) cationic polymers [14]; (d) surface-attached poly(vinyl-n-alkylpyridinium) [15]; (e) cationic oligo-polymers [16]; and (f) poly(m-phenylene ethylenes) [17].

Helical ß-peptides (used as staves in the barrel-stave method):

Examples of this class include cationic ß-peptides with antibiotic activity, presumably acting as amphiphilic helices that form micellar pores in anionic bilayer membranes [18].

Monomeric steroids:

Examples of this sub-class include synthetic carriers, channels and pores formed by monomeric steroids [19], synthetic cationic steroid antibiotics that may act by forming micellar pores in anionic membranes [20], neutral steroids as anion carriers [21], and supramolecular ion channels [22].

Complex minimalist systems:

Examples of this sub-class falling within the scope of this survey include ‘minimalist’ amphiphiles as synthetic ion channels and pores [23], membrane-active ‘smart’ double-chain amphiphiles, expected to form ‘micellar pores’ or self-assemble into ion channels in response to acid or light [24], and double-chain amphiphiles that may form ‘micellar pores’ at the boundary between photopolymerized and host bilayer domains and representative peptide conjugates that may self-assemble into supramolecular pores or exhibit antibiotic activity [25].

Non-peptide macrocycles as hoops:

Examples of this sub-class falling within the scope of this survey include synthetic ion channels formed by non-peptide macrocycles, acyclic analogs [26], and peptide macrocycles containing abiotic amino acids [27].

Peptide macrocycles as hoops and staves:

Examples of this sub-class include (a) synthetic ion channels formed by self-assembly of macrocyclic peptides into genuine barrel-hoop motifs that mimic the ß-helix of gramicidin A with cyclic ß-sheets. The macrocycles are designed to bind on top of channels, and cationic antibiotics (and several analogs) are proposed to form micellar pores in anionic membranes [28]; (b) synthetic carriers, antibiotics (and analogs), and pores (and analogs) formed by macrocyclic peptides with non-natural subunits. Certain macrocycles may act as ß-sheets, possibly as staves of ß-barrel-like pores [29]; (c) bioengineered pores as sensors. Covalent capturing and fragmentations have been observed on the single-molecule level within an engineered α-hemolysin pore containing an internal reactive thiol [30].

Summary

Thus even without knowledge of supramolecular or organic chemistry, one can see that a variety of alternate approaches to the creation of synthetic ion channels, and several sub-approaches within each larger ‘design motif’ or broad-approach, not only exist but have been experimentally verified, varietized, and refined.

Second Survey

The following selections [31] illustrate the chemical, structural, and functional varieties of synthetic ion channels, categorized according to whether they are cation-conducting or anion-conducting. These examples are used to further emphasize the extent of the field, and the number of alternative approaches to synthetic ion-channel design, implementation, integration, and experimental verification already in existence. Permission to use all the following selections and figures was obtained from the author of the source.

There are five classical design-motifs for synthetic ion-channels, categorized by structure, that are identified within the paper:

A: Unimolecular macromolecules,
B: Complex barrel-stave,
C: Barrel-rosette,
D: Barrel hoop, and
E: Micellar supramolecules.

Cation Conducting Channels:

UNIMOLECULAR

“The first non-peptidic artificial ion channel was reported by Kobuke et al. in 1992” [33].

The channel contained “an amphiphilic ion pair consisting of oligoether-carboxylates and mono– (or di-) octadecylammonium cations. The carboxylates formed the channel core and the cations formed the hydrophobic outer wall, which was embedded in the bilipid membrane with a channel length of about 24 to 30 Å. The resultant ion channel, formed from molecular self-assembly, is cation-selective and voltage-dependent” [34].

“Later, Kobuke et al. synthesized another channel comprising a resorcinol-based cyclic tetramer as the building block. The resorcin[4]arene monomer consisted of four long alkyl chains which aggregated to form a dimeric supramolecular structure resembling that of Gramicidin A” [35]. “Gokel et al. had studied [a set of] simple yet fully functional ion channels known as ‘hydraphiles’” [39].

“An example (channel 3) is shown in Figure 1.6, consisting of diaza-18-crown-6 crown ether groups and alkyl chains as side arms and spacers. Channel 3 is capable of transporting protons across the bilayer membrane” [40].

“A covalently bonded macrotetracycle (Figure 1.8) was shown to be about three times more active than Gokel’s ‘hydraphile’ channel, and its amide-containing analogue also showed enhanced activity” [44].

“Inorganic derivatives using crown ethers have also been synthesized. Hall et al. synthesized an ion channel consisting of a ferrocene and 4 diaza-18-crown-6 units linked by 2 dodecyl chains (Figure 1.9). The ion channel was redox-active, as oxidation of the ferrocene caused the compound to switch to an inactive form” [45].

BARREL-STAVE:

“These are more difficult to synthesize [in comparison to unimolecular varieties] because the channel formation usually involves self-assembly via non-covalent interactions” [47]. “A cyclic peptide composed of an even number of alternating D– and L-amino acids (Figure 1.10) was suggested by De Santis to form a barrel-hoop structure through backbone–backbone hydrogen bonds” [49].

“A tubular nanotube synthesized by Ghadiri et al., consisting of cyclic D- and L-peptide subunits, forms a flat, ring-shaped conformation that stacks through an extensive anti-parallel ß-sheet-like hydrogen-bonding interaction (Figure 1.11)” [51].

“Experimental results have shown that the channel can transport sodium and potassium ions. The channel can also be constructed by the use of direct covalent bonding between the sheets so as to increase the thermodynamic and kinetic stability” [52].

“By attaching peptides to the octiphenyl scaffold, a ß-barrel can be formed via self-assembly through the formation of ß-sheet structures between the peptide chains (Figure 1.13)” [53].

“The same scaffold was used by Matile et al. to mimic the structure of the macrolide antibiotic amphotericin B. The channel synthesized was shown to transport cations across the membrane” [54].

“Attaching the electron-poor naphthalene diimides (NDIs) to the same octiphenyl scaffold led to a hoop-stave mismatch during self-assembly that resulted in a twisted and closed channel conformation (Figure 1.14). Adding the complementary dialkoxynaphthalene (DAN) donor led to cooperative interactions between NDI and DAN that favor the formation of a barrel-stave ion channel” [57].

MICELLAR

“These aggregate channels are formed by amphotericin involving both sterols and antibiotics arranged in two half-channel sections within the membrane” [58].

“An active form of the compound is the bolaamphiphile (a two-headed amphiphile). Figure 1.15 shows an example that forms an active channel structure through dimerization or trimerization within the bilayer membrane. Electrochemical studies have shown that the monomer is inactive and that the active form involves a dimer or larger aggregates” [60].

ANION CONDUCTING CHANNELS:

“A highly active, anion selective, monomeric cyclodextrin-based ion channel was designed by Madhavan et al. (Figure 1.16). Oligoether chains were attached to the primary face of the ß-cyclodextrin head group via amide bonds. The hydrophobic oligoether chains were chosen because they are long enough to span the entire lipid bilayer. The channel was able to select “anions over cations” and “discriminate among halide anions in the order I- > Br- > Cl- (following Hofmeister series)” [61].

“The anion selectivity occurred via the ring of ammonium cations being positioned just beside the cyclodextrin head group, which helped to facilitate anion selectivity. Iodide ions were transported the fastest because the activation barrier to enter the hydrophobic channel core is lower for I- compared to either Br- or Cl-” [62]. “A more specific artificial anion selective ion channel was the chloride selective ion channel synthesized by Gokel. The building block involved a heptapeptide with Proline incorporated (Figure 1.17)” [63].

Cellular Prosthesis: Inklings of a New Interdisciplinary Approach

The paper cites “nanoreactors for catalysis and chemical or biological sensors” and “interdisciplinary uses as nano-filtration membranes, drug or gene delivery vehicles/transporters as well as channel-based antibiotics that may kill bacterial cells preferentially over mammalian cells” as some of the main applications of synthetic ion-channels [65], other than their normative use in elucidating cellular function and operation.

However, I argue that a whole interdisciplinary field, a heretofore-unrecognized new approach or sub-field of Functionally Restorative Medicine, is possible. It would take the technologies and techniques involved in constructing, integrating, and experimentally verifying either (a) non-biological analogues of ion-channels and ion-pumps (and thus of trans-membrane proteins in general, also sometimes referred to as transport proteins or integral membrane proteins) and membranes (which include normative bilipid membranes, non-lipid membranes, and chemically augmented bilipid membranes), or (b) artificially synthesized biological analogues of ion-channels, ion-pumps, and membranes, which are structurally and chemically equivalent to naturally occurring biological components but are synthesized artificially. It would then apply such technologies and techniques toward the purpose of gradually replacing the existing biological neurons constituting our nervous systems – or at least those neuron-populations that comprise the neocortex and prefrontal cortex – and, through iterative procedures of gradual replacement, thereby achieve indefinite longevity. There is still work to be done in determining the comparative advantages and disadvantages of various structural and functional (i.e., design) motifs, and in the logistics of implementing the iterative replacement or reconstitution of ion-channels, ion-pumps, and sections of neuronal membrane in vivo.

The conceptual schemes outlined in Concepts for Functional Replication of Biological Neurons [66], Gradual Neuron Replacement for the Preservation of Subjective-Continuity [67], and Wireless Synapses, Artificial Plasticity, and Neuromodulation [68] would constitute variations on the basic approach underlying this proposed, embryonic interdisciplinary field. Certain approaches within nanomedicine itself, particularly those that constitute the functional emulation of existing cell-types, such as (but not limited to) Robert Freitas’s conceptual design for the functional emulation of the red blood cell (a.k.a. erythrocyte, haematid), the Respirocyte [69], should be seen as falling under the purview of this new approach, although not all approaches to nanomedicine (e.g., diagnostics, drug delivery, and neuroelectronic interfacing) constitute the physical (i.e., electromechanical, kinetic, and/or molecular, physically embodied) and functional emulation of biological cells.

The field of functionally-restorative medicine in general (and of nanomedicine in particular) and the fields of supramolecular and organic chemistry converge here, where these technological, methodological, and experimental infrastructures developed in the fields of Synthetic Ion-Channels and Ion Channel Reconstitution can be employed to develop a new interdisciplinary approach that applies the logic of prosthesis to the cellular and cellular-component (i.e., sub-cellular) scale; same tools, new use. These techniques could be used to iteratively replace the components of our neurons as they degrade, or to replace them with more robust systems that are less susceptible to molecular degradation. Instead of repairing the cellular DNA, RNA, and protein transcription and synthesis machinery, we bypass it completely by configuring and integrating the neuronal components (ion-channels, ion-pumps, and sections of bilipid membrane) directly.

Thus I suggest that theoreticians of nanomedicine look to the large quantity of literature already developed in the emerging fields of synthetic ion-channels and membrane-reconstitution, towards the objective of adapting and applying existing technologies and methodologies to the new purpose of iterative maintenance, upkeep and/or replacement of cellular (and particularly neuronal) constituents with either non-biological analogues or artificially synthesized but chemically/structurally equivalent biological analogues.

This new sub-field of Synthetic Biology needs a name to differentiate it from the other approaches to Functionally Restorative Medicine. I suggest the designation ‘cellular prosthesis’.

References:

[1] Williams (1994). An introduction to the methods available for ion channel reconstitution. In D. C. Ogden (Ed.), Microelectrode Techniques: The Plymouth Workshop Edition. Cambridge: Company of Biologists.

[2] Tomich, J., & Montal, M. (1996). U.S. Patent No. 5,16,890. Washington, DC: U.S. Patent and Trademark Office.

[3] Matile, S., Som, A., & Sorde, N. (2004). Recent synthetic ion channels and pores. Tetrahedron, 60(31), 6405–6435. ISSN 0040-4020. doi:10.1016/j.tet.2004.05.052. Access: http://www.sciencedirect.com/science/article/pii/S0040402004007690

[4] Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.

[5] Ibid., p. 6411.

[6] Ibid., p. 6416.

[7] Ibid., p. 6413.

[8] Ibid., p. 6412.

[9] Ibid., p. 6414.

[10] Ibid., p. 6425.

[11] Ibid., p. 6427.

[12] Ibid., p. 6416.

[13] Ibid., p. 6419.

[14] Ibid.

[15] Ibid.

[16] Ibid., p. 6419.

[17] Ibid.

[18] Ibid., p. 6421.

[19] Ibid., p. 6422.

[20] Ibid.

[21] Ibid.

[22] Ibid.

[23] Ibid., p. 6423.

[24] Ibid.

[25] Ibid.

[26] Ibid., p. 6426.

[27] Ibid.

[28] Ibid., p. 6427.

[29] Ibid., p. 6427.

[30] Ibid., p. 6427.

[31] Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.

[32] Ibid., p. 4.

[33] Ibid.

[34] Ibid.

[35] Ibid.

[36] Ibid., p. 7.

[37] Ibid., p. 8.

[38] Ibid., p. 7.

[39] Ibid.

[40] Ibid.

[41] Ibid.

[42] Ibid.

[43] Ibid., p. 8.

[44] Ibid.

[45] Ibid., p. 9.

[46] Ibid.

[47] Ibid.

[48] Ibid., p. 10.

[49] Ibid.

[50] Ibid.

[51] Ibid.

[52] Ibid., p. 11.

[53] Ibid., p. 12.

[54] Ibid.

[55] Ibid.

[56] Ibid.

[57] Ibid.

[58] Ibid., p. 13.

[59] Ibid.

[60] Ibid., p. 14.

[61] Ibid.

[62] Ibid.

[63] Ibid., p. 15.

[64] Ibid.

[65] Ibid.

[66] Cortese, F. (2013). Concepts for Functional Replication of Biological Neurons. The Rational Argumentator. Access: https://www.rationalargumentator.com/index/blog/2013/05/gradual-neuron-replacement/

[67] Cortese, F. (2013). Gradual Neuron Replacement for the Preservation of Subjective-Continuity. The Rational Argumentator. Access: https://www.rationalargumentator.com/index/blog/2013/05/gradual-neuron-replacement/

[68] Cortese, F. (2013). Wireless Synapses, Artificial Plasticity, and Neuromodulation. The Rational Argumentator. Access: https://www.rationalargumentator.com/index/blog/2013/05/wireless-synapses/

[69] Freitas Jr., R. (1998). “Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell”. Artificial Cells, Blood Substitutes, and Immobil. Biotech. (26): 411–430. Access: http://www.ncbi.nlm.nih.gov/pubmed/9663339

Immortality: Bio or Techno? – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 5, 2013
******************************
This essay is the eleventh and final chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first ten chapters were previously published on The Rational Argumentator under the following titles:
***

I Was a Techno-Immortalist Before I Came of Age

From the preceding chapters in this series, one can see that I recapitulated many notions and conclusions found in normative Whole-Brain Emulation. I realized that measuring the functional divergence between a candidate functional-equivalent and its original (by virtually or artificially replicating environmental stimuli so as to give both the same inputs) provides an experimental methodology for empirically validating the sufficiency and efficacy of different approaches. (Note, however, that such tests could not be performed to determine which NRU-designs or replication-approaches would preserve subjective-continuity, if the premises entertained during later periods of my project—that subjective-continuity may require a sufficient degree of operational “sameness”, and not just a sufficient degree of functional “sameness”—are correct.) I realized that we would only need to replicate in intensive detail and rigor those parts of our brain manifesting our personalities and higher cognitive faculties (i.e., the neocortex), and could get away with replicating at lower functional resolution the parts of the nervous system dealing with perception, actuation, and feedback between perception and actuation.
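
A minimal sketch of how such a divergence test might be scored, under assumptions of my own: the original and the candidate receive identical stimuli, their output traces (here, hypothetical firing rates per time bin) are recorded, and a root-mean-square divergence is computed. The metric is an illustrative choice, not one prescribed by the Whole-Brain Emulation literature.

```python
import math


def functional_divergence(original, candidate):
    """RMS divergence between two output traces recorded under
    identical input stimuli (e.g., firing rates per time bin)."""
    assert len(original) == len(candidate)
    return math.sqrt(
        sum((o - c) ** 2 for o, c in zip(original, candidate)) / len(original)
    )


# Hypothetical traces: the candidate tracks the original closely.
orig_trace = [10.0, 12.5, 9.8, 14.1]
cand_trace = [10.2, 12.3, 9.9, 14.4]
print(functional_divergence(orig_trace, cand_trace))  # small => near-equivalent
```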

I read Eric Drexler’s Engines of Creation and imported the use of nanotechnology to facilitate both functional-replication (i.e., the technologies and techniques needed to replicate the functional and/or operational modalities of existing biological neurons) and the intensive, precise, and accurate scanning necessitated thereby. This was essentially Ray Kurzweil’s and Robert Freitas’s approach to the technological infrastructure needed for mind-uploading, as I discovered in 2010 via The Singularity is Near.

My project also bears stark similarities with Dmitry Itskov’s Project Avatar. My work on conceptual requirements for transplanting the biological brain into a fully cybernetic body — taking advantage of the technological and methodological infrastructures already in development for use in the separate disciplines of robotics, prosthetics, Brain-Computer Interfaces and sensory-substitution to facilitate the operations of the body — is a prefigurement of his Phase 1. My later work on approaches to the functional replication of neurons for the purpose of gradual substrate replacement/transfer and integration also parallels his later phases, in which the brain is gradually replaced with an equivalent computational emulation.

The main difference between my project and the extant Techno-Immortalist approaches, however, lies in my later inquiries into neglected potential bases for (a) our sense of experiential subjectivity (the feeling of being, what I’ve called immediate subjective-continuity)—and thus the entailed requirements for mental substrates aiming to maintain or attain such immediate subjectivity—and (b) our sense of temporal subjective-continuity (the feeling of being the same person through a process of gradual substrate-replacement—which, I take pains to remind the reader, already exists in the biological brain via the natural biological process of molecular turnover, which I called metabolic replacement throughout the course of the project), and, likewise, the requirements for mental substrates aiming to maintain temporal subjective-continuity through a gradual substrate-replacement/transfer procedure.

In this final chapter, I summarize the main approaches to subjective-continuity thus far considered, including possible physical bases for its current existence and the entailed requirements for NRU designs (that is, for Techno-Immortalist approaches to indefinite-longevity) that maintain such physical bases of subjective-continuity. I will then explore why “Substrate-Independent Minds” is a useful and important term, and try to dispel one particularly common and easy-to-make misconception resulting from it.

Why Should We Worry about Subjective-Continuity?

This concern marks perhaps the most telling difference between my project and normative Whole-Brain Emulation. Instead of stopping at the presumption that functional equivalence correlates with immediate subjective-continuity and temporal subjective-continuity, I explored several features of neural operation that looked like candidates for providing a basis for both types of subjective-continuity, by looking for those systemic properties and aspects that the biological brain possesses and other physical systems don’t. The physical system underlying the human mind (i.e., the brain) possesses experiential subjectivity; my premise was that we should look for properties not shared by other physical systems to find a possible basis for the property of immediate subjective-continuity. I’m not claiming that any of the aspects and properties considered definitely constitute such a basis; they were merely the avenues I explored throughout my 4-year quest to conquer involuntary death. I do claim, however, that we are forced to conclude either that some aspect shared by the individual components (e.g., neurons) of the brain and not shared by other types of physical systems forms such a basis (which doesn’t preclude the possibility of immediate subjective-continuity being a spectrum or gradient rather than a definitive “thing” or process with non-variable parameters), or else that immediate subjective-continuity is a normal property of all physical systems, from atoms to rocks.

A phenomenological proof of the non-equivalence of function and subjectivity or subjective-experientiality is the physical irreducibility of qualia – that we could understand in intricate detail the underlying physics of the brain and sense-organs, and nowhere derive or infer the nature of the qualia such underlying physics embodies. To experimentally verify which approaches to replication preserve both functionality and subjectivity would necessitate a science of qualia. This could conceivably be attempted through making measured changes to the operation or inter-component relations of a subject’s mind (or sense organs)—or by integrating new sense organs or neural networks—and recording the resultant changes to his experientiality—that is, to what exactly he feels. Though such recordings would be limited to his descriptive ability, we might be able to make some progress—e.g., he could detect the generation of a new color, and communicate that it is indeed a color that doesn’t match the ones normally available to him, while still failing to communicate to others what the color is like experientially or phenomenologically (i.e., what it is like in terms of qualia). This gets cruder the deeper we delve, however. While we have unchanging names for some “quales” (e.g., green, sweetness, hot, and cold), when it comes to the qualia corresponding to our perception of our own “thoughts” (which here designates all non-normatively-perceptual experiential modalities available to the mind—thus including wordless “daydreaming” and excluding autonomic functions like digestion or respiration), we have both far less precision (i.e., fewer words to describe them) and less accuracy (i.e., too many words for one thing, which the subject may confuse; the lack of a quantitative definition for words relating to emotions and mental modalities/faculties seems to ensure that errors may be carried forward and increase with each iteration, making precise correlation of operational/structural changes with changes to qualia or experientiality increasingly harder and more unlikely).

Thus whereas the normative movements of Whole-Brain Emulation and Substrate-Independent Minds stopped at functional replication, I explored approaches to functional replication that preserved experientiality (i.e., a subjective sense of anything) and that maintained subjective-continuity (the experiential correlate of feeling like being yourself) through the process of gradual substrate-transfer.

I do not mean to undermine in any way Whole-Brain Emulation and the movement towards Substrate-Independent Minds promoted by such people as Randal Koene via, formerly, his minduploading.org website and, more recently, his Carbon Copies project, Anders Sandberg and Nick Bostrom through their WBE Roadmap, and various other projects on connectomes. These projects are untellably important, but conceptions of subjective-continuity (not pertaining to its relation to functional equivalence) are beyond their scope.

Whether or not subjective-continuity is possible through a gradual-substrate-replacement/transfer procedure is not under question. That we achieve and maintain subjective-continuity despite our constituent molecules being replaced within a period of 7 years, through what I’ve called “metabolic replacement” but what would more normatively be called “molecular-turnover” in molecular biology, is not under question either. What is under question is (a) what properties biological nervous systems possess that could both provide a potential physical basis for subjective-continuity and that other physical systems do not possess, and (b) what the design requirements are for approaches to gradual substrate replacement/transfer that preserve such postulated sources of subjective-continuity.

Graduality

This was the first postulated basis for preserving temporal subjective-continuity. Our bodily systems’ constituent molecules are all replaced within a span of 7 years, which provides empirical verification for the existence of temporal subjective-continuity through gradual substrate replacement. This is not, however, an actual physical basis for immediate subjective-continuity, as the later avenues of enquiry are; it is rather a way of avoiding externally induced subjective-discontinuity, rather than of maintaining an existing biological basis for subjective-continuity. We are most likely to avoid negating subjective-continuity through a substrate-replacement procedure if we try to maintain the existing degree of graduality (the molecular-turnover or “metabolic-replacement” rate) that exists in biological neurons.
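
As a back-of-the-envelope illustration of what “maintaining the existing degree of graduality” might mean quantitatively, the sketch below derives a target daily replacement fraction from the 7-year turnover figure. The exponential-turnover model and the 99% threshold for “full” replacement are my own simplifying assumptions, not measured biology.

```python
import math

YEARS_TO_NEAR_FULL_REPLACEMENT = 7
RESIDUAL_FRACTION = 0.01  # treat "full" as 99% replaced (an assumption)

days = YEARS_TO_NEAR_FULL_REPLACEMENT * 365
daily_rate = -math.log(RESIDUAL_FRACTION) / days
print(f"target daily replacement fraction: {daily_rate:.4%}")
# roughly 0.18% of constituents per day, matching the existing graduality
```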

The reasoning behind concerns of graduality also serves to illustrate a common misconception created by the term “Substrate-Independent Minds”. This term should denote the premise that mind can be instantiated on different types of substrate, in the way that a given computer program can run on different types of computational hardware. It stems from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that mind is an emergent process not reducible to its isolated material constituents, while still being instantiated thereby. This first (legitimate) interpretation is a refutation of all claims of metaphysical vitalism or substance dualism. The term should not denote the claim that because mind is software, we can thus send our minds (say, encoded in a wireless signal) from one substrate to another without subjective-discontinuity. This second meaning would incur the emergent effect of a non-gradual substrate-replacement procedure (that is, the wholesale reconstruction of a duplicate mind without any gradual integration procedure). In such a case one stops all causal interaction between components of the brain—in effect putting it on pause. The brain is now static. This is different even from being in an inoperative state, where at least the components (i.e., neurons) still undergo minor operational fluctuations and are still “on” in an important sense (see “Immediate Subjective-Continuity” below), which is not the case here. Beaming between substrates necessitates that all causal interaction—and thus procedural continuity—between software-components is halted during the interval of time in which the information is encoded, sent wirelessly, and subsequently decoded. The mind would be reinstantiated upon arrival in the new substrate, yes, but not without being put on pause in the interim. The phrase “Substrate-Independent Minds” is an important and valuable one and should indeed be championed with righteous vehemence—but only in regard to its first meaning (that mind can be instantiated on various different substrates), and not its second, illegitimate meaning (that we ourselves can switch between mental substrates, without any sort of gradual-integration procedure, and still retain subjective-continuity).

Later lines of thought in this regard consisted of positing several sources of subjective-continuity and then conceptualizing various different approaches or varieties of NRU-design that would maintain these aspects through the gradual-replacement procedure.

Immediate Subjective-Continuity

This line of thought explored whether certain physical properties of biological neurons provide the basis for subjective-continuity, and whether current computational paradigms would need to possess such properties in order to serve as a viable substrate-for-mind—that is, one that maintains subjective-continuity. The biological brain has massive parallelism—that is, separate components are instantiated concurrently in time and space. They actually exist and operate at the same time. By contrast, current paradigms of computation, with a few exceptions, are predominantly serial. They instantiate a given component or process one at a time and jump between components or processes so as to integrate these separate instances and create the illusion of continuity. If such computational paradigms were used to emulate the mind, then only one component (e.g., neuron or ion-channel, depending on the chosen model-scale) would be instantiated at a given time. This line of thought postulates that computers emulating the mind may need to be massively parallel in the same way that the biological brain is in order to preserve immediate subjective-continuity.
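
The contrast can be shown in miniature with a toy network update, sketched below. The serial loop activates one “neuron” at a time, so later components already see their neighbors’ updated states within the same instant; the synchronous pass computes every next state from a single frozen snapshot, approximating components that genuinely operate at the same time. The update rule and values are arbitrary illustrations.

```python
def step_serial(state):
    # In-place, one component at a time: later neurons see already-updated
    # neighbors within the same "instant" (indices wrap around as a ring).
    for i in range(len(state)):
        state[i] = 0.5 * (state[i] + state[i - 1])
    return state


def step_synchronous(state):
    # Every update computed from one frozen snapshot: the software analogue
    # of components that exist and operate at the same time.
    return [0.5 * (state[i] + state[i - 1]) for i in range(len(state))]


print(step_serial([1.0, 0.0, 0.0, 0.0]))       # [0.5, 0.25, 0.125, 0.0625]
print(step_synchronous([1.0, 0.0, 0.0, 0.0]))  # [0.5, 0.5, 0.0, 0.0]
```

Identical inputs yield different results, which is precisely the operational difference at issue; true massive parallelism would remove even the synchronous pass’s sequential bookkeeping.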

Procedural Continuity

Much like the preceding line of thought, this postulates that a possible basis for temporal subjective-continuity is the resting membrane potential of neurons. While in an inoperative state—i.e., not being impinged by incoming action-potentials, or not being stimulated—it (a) isn’t definitively off, but rather produces a baseline voltage that assures that there is no break (or region of discontinuity) in its operation, and (b) still undergoes minor fluctuations from the baseline value within a small deviation-range, thus showing that causal interaction amongst the components emergently instantiating that resting membrane potential (namely ion-pumps) never halts. Logic gates, on the other hand, do not produce a continuous voltage when in an inoperative state. This line of thought claims that computational elements used to emulate the mind should exhibit the generation of such a continuous inoperative-state signal (e.g., voltage) in order to maintain subjective-continuity. The claim’s stronger version holds that the continuous inoperative-state signal produced by such computational elements should undergo minor fluctuations (i.e., state-transitions) within the range of the larger inoperative-state signal, which maintains causal interaction among lower-level components and thus exhibits the postulated basis for subjective-continuity—namely procedural continuity.
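
The signal-level contrast with idle logic gates can be sketched as follows. The resting-potential value and fluctuation range are assumed, illustrative parameters rather than physiological measurements.

```python
import random

REST_MV = -70.0       # assumed resting membrane potential
FLUCTUATION_MV = 0.5  # assumed small deviation range


def resting_potential_trace(steps):
    """A baseline voltage that never switches off: minor fluctuations only."""
    return [REST_MV + random.uniform(-FLUCTUATION_MV, FLUCTUATION_MV)
            for _ in range(steps)]


def idle_logic_gate_trace(steps):
    """Idealized idle gate output: a static value with no ongoing dynamics."""
    return [0.0] * steps


print(resting_potential_trace(5))  # always "on", always fluctuating
print(idle_logic_gate_trace(5))    # no baseline signal, no fluctuation
```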

Operational Isomorphism

This line of thought claims that a possible source for subjective-continuity is the sameness of the baseline components comprising the emergent system instantiating mind. In physicality this isn’t a problem, because the higher-scale components (e.g., single neurons, sub-neuron components like ion-channels and ion-pumps, and the individual protein complexes forming the sub-components of an ion-channel or pump) are instantiated by the lower-level components, and those lower-level components are more similar in terms of the rules determining their behavior and state-changes. At the molecular scale, the features determining state-changes (intra-molecular forces, atomic valences, etc.) are the same. This changes as we go up the scale—most notably at the scale of high-level neural regions/systems. In a software model, however, we have a choice as to what scale we use as our model-scale. This postulated source of subjective-continuity would entail that we choose as our model-scale one at which the components have a high degree of this property (operational isomorphism, or similarity), and that we not choose a scale at which the components have a lesser degree of this property.

Operational Continuity

This line of thought explored the possibility that we might introduce operational discontinuity by modeling (i.e., computationally instantiating) not the software instantiated by the physical components of the neuron, but instead those physical components themselves—which for illustrative purposes can be considered as the difference between instantiating a piece of software and instantiating the physics of the logic gates giving rise to that software. Though the software would necessarily be instantiated as a vicarious result of computationally instantiating its biophysical foundation rather than being instantiated directly, we may be introducing additional operational steps and thus adding an unnecessary dimension of discontinuity that needlessly jeopardizes the likelihood of subjective-continuity.
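
The difference between instantiating the software and instantiating the physics of the logic gates can be caricatured as follows; the “analog” model below is an illustrative stand-in for device physics, not a real circuit model.

```python
def nand_logic(a, b):
    """The 'software': one procedural step yields the result."""
    return 0 if (a and b) else 1


def nand_analog(a, b, steps=10):
    """The 'physics': the same result, reached only through many
    intermediate settling steps (a crude RC-like caricature)."""
    target = 0.0 if (a and b) else 5.0  # assumed 5 V logic levels
    v = 2.5                             # arbitrary initial output voltage
    for _ in range(steps):
        v += 0.5 * (target - v)         # exponential settling toward target
    return v


print(nand_logic(1, 1))   # 0, in one step
print(nand_analog(1, 1))  # ~0 V, via added intermediate operational steps
```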

These concerns are wholly divorced from functionalist concerns. If we disregarded these potential sources of subjective-continuity, we could still functionally replicate a mind in all empirically verifiable measures yet nonetheless fail to create minds possessing experiential subjectivity. Moreover, the verification experiments discussed in Part 2 do provide a falsifiable methodology for determining which approaches best satisfy the requirements of functional equivalence. They do not, however, provide a method of determining which postulated sources of subjective-continuity are true—simply because we have no falsifiable measures of either immediate or temporal subjective-discontinuity other than functionality. If functional equivalence failed, that would tell us that subjective-continuity failed to be maintained; if functional equivalence were achieved, however, it would not follow that subjective-continuity was maintained.

Bio or Cyber? Does It Matter?

Biological approaches to indefinite longevity, such as Aubrey de Grey’s SENS and Michael Rose’s Evolutionary Selection for Longevity, among others, have both comparative advantages and drawbacks. Their chances of introducing subjective-discontinuity are virtually nonexistent compared to non-biological (which I will refer to as Techno-Immortalist) approaches, which makes them at once more appealing. However, it remains to be seen whether the advantages of the techno-immortalist approach outweigh its comparative dangers in regard to its potential to introduce subjective-discontinuity. If such dangers can be obviated, the techno-immortalist approach has certain potentials which Bio-Immortalist projects lack—or which are at least comparatively harder to facilitate using biological approaches.

Perhaps foremost among these potentials is the ability to actively modulate and modify the operations of individual neurons, which, if integrated across scales (that is, the concerted modulation/modification of whole emergent neural networks and regions via operational control over their constituent individual neurons), would allow us to take control over our own experiential and functional modalities (i.e., our mental modes of experience and general abilities/skills), thus increasing our degree of self-determination and the control we exert over the circumstances and determining conditions of our own being. Self-determination is the sole central and incessant essence of man; it is his means of self-overcoming—of self-dissent in a striving towards self-realization—and the ability to increase the extent of such self-control, self-mastery, and self-actualization is indeed a comparative advantage of techno-immortalist approaches.

To modulate and modify biological neurons, on the other hand, necessitates either high-precision genetic engineering, or likely the use of nanotech (i.e., NEMS), because whereas the proposed NRUs already have the ability to controllably vary their operations, biological neurons necessitate an external technological infrastructure for facilitating such active modulation and modification.

Biological approaches to increased longevity also appear to necessitate less technological infrastructure in terms of basic functionality. Techno-immortalist approaches require precise scanning technologies and techniques that neither damage nor distort (i.e., affect to the point of operational and/or functional divergence from their normal in situ state of affairs) the features and properties they are measuring. However, there is a useful distinction to be made between biological approaches to increased longevity and biological approaches to indefinite longevity. Aubrey de Grey’s notion of Longevity Escape Velocity (LEV) serves to illustrate this distinction. With SENS and most biological approaches, he points out, although remediating certain biological causes of aging will extend our lives, by that time other causes of aging that had been superseded (i.e., prevented from making a significant impact on aging) by the higher-impact causes may begin to make a non-negligible impact. Aubrey’s proposed solution is LEV: if we can develop remedies for these newly significant causes within the amount of time gained by remediating the first set of causes, then we can stay on the leading edge and continue to prolong our lives. This is in contrast to other biological approaches, like Eric Drexler’s conception of nanotechnological cell-maintenance and cell-repair systems, which, by virtue of being able to fix any source of molecular damage or disarray vicariously (not by eliminating the source, but by iterative repair and/or replacement of its causes or “symptoms”), will continue to work on any new molecular causes of damage without any new upgrades or innovations to their underlying technological and methodological infrastructures.
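
A toy numerical rendering of the LEV condition, with all figures assumed purely for illustration: so long as each remedy arrives within the years gained from the previous one, remaining life expectancy keeps being topped up; otherwise it eventually runs out.

```python
def years_survived(remaining, gain_per_remedy, development_time, rounds):
    """Simulate rounds of remediation: wait out each development period,
    then add the years the new remedy grants."""
    total = 0.0
    for _ in range(rounds):
        if remaining < development_time:
            return total + remaining      # the next remedy arrives too late
        remaining -= development_time
        total += development_time
        remaining += gain_per_remedy
    return total + remaining


# Gains outpace development time: escape velocity (survives all 20 rounds).
print(years_survived(30, gain_per_remedy=12, development_time=10, rounds=20))
# Gains lag development time: the remedies eventually arrive too late.
print(years_survived(30, gain_per_remedy=8, development_time=10, rounds=20))
```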

These would more appropriately be deemed indefinite-biological-longevity technologies, in contrast to (finite) biological-longevity technologies. Techno-immortalist approaches are by and large exclusively of the indefinite-longevity-extension variety, and so have an advantage over certain biological approaches to increased longevity, but such advantages do not apply to biological approaches to indefinite longevity.

A final advantage of techno-immortalist approaches is the independence from external environments that they provide us. They also make death by accident far less likely, both by enabling us to have more durable bodies and by providing independence from external environments, which means that certain extremes of temperature, pressure, impact-velocity, atmosphere, etc., will not immediately entail our death.

I do not want to discredit any approaches to immortality discussed in this essay, nor any I haven’t mentioned. Every striving and attempt at immortality is virtuous and righteous, and this sentiment will only become more and more apparent, culminating on the day when humanity looks back and wonders how we could have spent so very much money and effort on the Space Race to the Moon with no perceivable scientific, resource, or monetary gain (though there were some nationalistic and militaristic considerations in terms of America not being superseded on either account by Russia), yet took so long to make a concerted global effort to first demand and then implement well-funded attempts to finally defeat death—that inchoate progenitor of 100,000 unprecedented cataclysms a day. It’s true—the world ends 100,000 times a day, to be lighted upon not once more for all of eternity. Every day. What have you done to stop it?

So What?

Indeed, so what? What does this all mean? After all, I never actually built any systems, or did any physical experimentation. I did, however, do a significant amount of conceptual development and thinking on both the practical consequences (i.e., required technologies and techniques, different implementations contingent upon different premises and possibilities, etc.) and the larger social and philosophical repercussions of immortality prior to finding out about other approaches. And I planned on doing physical experimentation and building physical systems; but I thought that working on it in my youth, until such a time as I would be in a position to test and implement these ideas more formally via academia or private industry, would be better for the long-term success of the endeavor.

As noted in Chapter 1, this reifies the naturality and intuitive simplicity of indefinite longevity’s ardent desirability and fervent feasibility, along a large variety of approaches ranging from biotechnology to nanotechnology to computational emulation. It also reifies the naturality and desirability of Transhumanism. I saw one of the virtues of this vision as its potential to make us freer, to increase our degree of self-determination, as giving us the ability to look and feel however we want, and the ability to be—and more importantly to become—anything we so desire. Man is marked most starkly by his urge and effort to make his own self—to formulate the best version of himself he can, and then to actualize it. We are always reaching toward our better selves—striving forward in a fit of unbound becoming toward our newest and thus truest selves; we always have been, and with any courage we always will.

Transhumanism is but the modern embodiment of our ancient striving towards increased self-determination and self-realization—of all we’ve ever been and done. It is the current best contemporary exemplification of what has always been the very best in us—the improvement of self and world. Indeed, the ‘trans’ and the ‘human’ in Transhumanism can only signify each other, for to be human is to strive to become more than human—or to become more so human, depending on which perspective you take.

So come along and long for more with me; the best is e’er yet to be!

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Koene, R. (2011). What is carboncopies.org? Retrieved February 28, 2013, from http://www.carboncopies.org/

Rose, M. (October 28, 2004). Biological Immortality. In B. Klein (Ed.), The Scientific Conquest of Death (pp. 17-28). Immortality Institute.

Sandberg, A., & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap, Technical Report #2008-3. Retrieved February 28, 2013, from http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013, from http://www.minduploading.org/

de Grey, A. D. N. J. (2004). Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now. PLoS Biol 2(6): e187. doi:10.1371/journal.pbio.0020187

Maintaining the Operational Continuity of Replicated Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 3, 2013
******************************
This essay is the tenth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first nine chapters were previously published on The Rational Argumentator under the following titles:
***

Operational Continuity

One of the reasons for continuing conceptual development of the physical-functionalist NRU (neuron-replication-unit) approach, despite the perceived advantages of the informational-functionalist approach, was the possibility that computational emulation would either fail to successfully replicate a given physical process (a functional-modality concern) or fail to successfully maintain subjective-continuity (an operational-modality concern), most likely due to a difference between the physical operation of possible computational substrates and the physical operation of the brain (see Chapter 2). In regard to functionality, we might fail to computationally replicate (whether in simulation or emulation) a relevant physical process for reasons other than vitalism. We could fail to understand the underlying principles governing it, or we might understand its underlying principles well enough to predictively model it yet still fail to understand how it affects the other processes occurring in the neuron—for instance, if we used different modeling techniques or general model types to model each component, we might be able to predictively model each individually while being unable to model how they affect each other, due to model untranslatability. Neither of these cases precludes the aspect in question from being completely material, and thus completely explicable, in principle, using the normative techniques we use to predictively model the universe. The physical-functionalist approach attempted to solve these potential problems through several NRU sub-classes, some of which kept certain biological features and functionally replaced certain others, while others kept and functionally replaced alternate sets of biological features. These can be considered varieties of biological-nonbiological NRU hybrids that functionally integrate into their own, predominantly non-biological operation those biological features (as they exist in the biological nervous system) that we failed to replicate functionally or operationally.

The subjective-continuity problem, however, is not concerned with whether something can be functionally replicated but with whether it can be functionally replicated while still retaining subjective-continuity throughout the procedure.

This category of possible basis for subjective-continuity has stark similarities to the possible problematic aspects (i.e., operational discontinuity) of current computational paradigms and substrates discussed in Chapter 2. In that case it was postulated that discontinuity occurred as a result of taking something normally operationally continuous and making it discontinuous: namely, (a) the fact that current computational paradigms are serial (whereas the brain has massive parallelism), which may cause components to only be instantiated one at a time, and (b) the fact that the resting membrane potential of biological neurons makes them procedurally continuous—that is, when in a resting or inoperative state they are still both on and undergoing minor fluctuations—whereas normative logic gates both do not produce a steady voltage when in an inoperative state (thus being procedurally discontinuous) and do not undergo minor fluctuations within such a steady-state voltage (or, more generally, a continuous signal) while in an inoperative state. I had a similar fear in regard to some mathematical and computational models as I understood them in 2009: what if we were taking what was a continuous process in its biological environment, and—by using multiple elements or procedural (e.g., computational, algorithmic) steps to replicate what would have been one element or procedural step in the original—effectively making it discontinuous by introducing additional intermediate steps? Or would we simply be introducing a number of continuous steps—that is, if each element or procedural step were operationally continuous in the same way that the components of a neuron are, would it then preserve operational continuity nonetheless?

This led me to attempt to develop a modeling approach aimed at retaining the same operational continuity as exists in biological neurons, which I will call the relationally isomorphic mathematical model. The biophysical processes comprising an existing neuron are what implement its computation; by using biophysical-mathematical models as our modeling approach, we might be introducing an element of discontinuity by mathematically modeling the physical processes that give rise to a computation/calculation, rather than modeling the computation/calculation directly. It might be the difference between modeling a given program and modeling the physical processes comprising the logic elements that give rise to that program. Thus, my novel approach during this period was to explore ways of modeling the computation directly.

Rather than using a host of mathematical operations to model the physical components that themselves give rise to a different type of mathematics, we instead use a modeling approach that maintains a one-to-one correspondence of elements or procedural steps with the level of scale that embodies the salient (i.e., aimed-for) computation. My attempts at developing this produced the following approach, though I lack the pure-mathematics and computer-science background to judge its true accuracy or utility. The components, their properties, and the inputs used for a given model (at whatever scale) are substituted by numerical values whose magnitudes preserve the relationships (e.g., ratio relationships) between components/properties and inputs, and by mathematical operations that preserve the relationships exhibited by their interactions. For instance, if the interaction between a given component/property and a given input produces an emergent inhibitory effect biologically, then one would combine them by taking their difference or their quotient, depending on whether they exemplify a linear or a nonlinear relationship. If the component/property and the input combine to produce emergently excitatory effects biologically, one would combine them by taking their sum or their product, depending on whether they increase excitation in a linear or a nonlinear manner.
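
To make these combination rules concrete, here is a minimal sketch in Python; it is my own illustrative rendering of the scheme described above, not anything from the original notes, and the choice of sum/product for excitation and difference/quotient for inhibition simply follows the four operations named in the preceding paragraph:

def combine(component, input_value, effect, relationship):
    # Combine a component/property value with an input value using an
    # operation that mirrors the biological relationship between them.
    # effect: "excitatory" or "inhibitory"; relationship: "linear" or "nonlinear".
    if effect == "excitatory":
        # Linear excitation -> sum; nonlinear excitation -> product.
        return component + input_value if relationship == "linear" else component * input_value
    if effect == "inhibitory":
        # Linear inhibition -> difference; nonlinear inhibition -> quotient.
        return component - input_value if relationship == "linear" else component / input_value
    raise ValueError("effect must be 'excitatory' or 'inhibitory'")

# Example: a component value of 3.0 meeting an input of 1.5 that inhibits it nonlinearly.
print(combine(3.0, 1.5, "inhibitory", "nonlinear"))  # 2.0

The point of such a scheme is that each biological interaction corresponds to exactly one mathematical operation, so no additional intermediate procedural steps are introduced.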

In an example from my notes, I tried to formulate how a chemical synapse could be modeled in this way. Neurotransmitters are given analog values, such as positive or negative numbers, whose sign depends on whether the neurotransmitter is excitatory or inhibitory and whose magnitude depends on how much more excitatory or inhibitory it is than other neurotransmitters, all in reference to a baseline value (perhaps 0 if neutral, i.e., neither excitatory nor inhibitory; however, we may need to make this a negative value, considering that the neuron’s resting membrane-potential is electrically negative, not electrochemically neutral). If neurotransmitters come in clusters, then one value would represent the neurotransmitter and another value its quantity, the sum or product of which represents the cluster. If a cluster consists of multiple neurotransmitter types, then two values (i.e., type and quantity) would be used for each, and the product of all values represents the cluster. Each summative-product value is given a second, vector value separate from its state-value, representing its direction and speed in the 3D space of the synaptic junction. Thus, by summing the products of all clusters, the resulting numerical value should contain the relational operations each value corresponds to, along with the interactions and relationships represented by the first- and second-order products. The key lies in determining whether the relationship between two elements (e.g., two neurotransmitters) is linear (in which case they are summed) or nonlinear (in which case they are combined to produce a product), and whether it is a positive or a negative relationship (negative relationships use the difference or the quotient in place of the sum or the product used for positive ones). Combining the vector products would take into account how each cluster’s speed and position affects the end result, thus effectively emulating the process of diffusion across the synaptic junction. The model’s past states (which might need to be included in such a modeling methodology to account for synaptic plasticity, e.g., long-term potentiation and long-term modulation) would hypothetically be incorporated via a temporal-vector value, wherein a third value (position along a temporal or “functional”/”operational” axis) is used when combining the values into a final summative product. This is similar to such modeling techniques as phase-space, a quantitative technique for modeling a given system’s “system-vector-states”, i.e., the functional/operational states it has the potential to possess.
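
As a minimal sketch of this synapse example (again in Python; the class, field names, and numbers are hypothetical illustrations rather than values from the original notes), each cluster can be encoded as the product of a signed neurotransmitter value and a quantity, with a velocity vector reserved for the diffusion step:

from dataclasses import dataclass

@dataclass
class Cluster:
    nt_value: float   # signed: positive if excitatory, negative if inhibitory
    quantity: float   # how much of the neurotransmitter the cluster contains
    velocity: tuple   # direction and speed in the 3D synaptic junction (unused here)

    def state_value(self):
        # A cluster's state-value is the product of its type-value and quantity.
        return self.nt_value * self.quantity

def transmission_value(clusters):
    # Summing the per-cluster products yields the cumulative excitatory/inhibitory
    # value of one synaptic transmission (diffusion and temporal vectors omitted).
    return sum(c.state_value() for c in clusters)

clusters = [
    Cluster(+1.5, 100, (0.0, 0.0, 1.0)),  # excitatory cluster
    Cluster(-3.0, 40, (0.1, 0.0, 0.9)),   # smaller but more strongly inhibitory cluster
]
print(transmission_value(clusters))  # 150 - 120 = 30: net excitatory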

How excitatory or inhibitory a given neurotransmitter is may depend upon the other neurotransmitters already present in the synaptic junction; thus, if the relationship between one neurotransmitter and a second is not the same as the relationship between that first neurotransmitter and an arbitrary third, then one cannot use static numerical values for them, because the sequence in which they were released would affect how cumulatively excitatory or inhibitory a given synaptic transmission is.

A hypothetically possible case of this would be one type of neurotransmitter that can bond or react with two or more other types. Let’s say it is more likely to bond or react with one than with the other. If the chemically less attractive (or less reactive) type were released first, the two would bond anyway, owing to the absence of the comparatively more attractive type; if the more attractive type were then released afterward, it would fail to bond, because the first neurotransmitter would already have bonded with the less attractive one.

If a given neurotransmitter’s numerical value or weighting is determined by its relation to other neurotransmitters (i.e., if one is excitatory and another is twice as excitatory, then if the first were valued at 1.5, the second would be 3, assuming a linear relationship), and a given neurotransmitter does prove to have a different relationship to one neurotransmitter than it does to another, then we cannot use a single value for it. Thus we might not be able to configure the model such that the normative mathematical operations follow naturally from one another; instead, we may have to computationally determine (via the hypothetically subjectively discontinuous method that incurs additional procedural steps) which mathematical operations to perform, and then perform them continuously, without having to stop and compute what comes next, so as to preserve subjective-continuity.
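
A toy simulation (Python; the names and the greedy first-come bonding rule are hypothetical illustrations of the scenario above, not a chemical model) shows why a single static value per neurotransmitter cannot capture such order-dependent transmission:

def bonded_partner(release_order, can_bond_with):
    # One neurotransmitter 'X' bonds with the first released partner it can
    # react with, and is then unavailable for any later, "preferred" arrival.
    for partner in release_order:
        if partner in can_bond_with:
            return partner
    return None

can_bond_with = {"A", "B"}  # X "prefers" A, but availability order decides.
print(bonded_partner(["A", "B"], can_bond_with))  # 'A'
print(bonded_partner(["B", "A"], can_bond_with))  # 'B': same inputs, different outcome

Because the same set of released neurotransmitters produces different bonds depending on sequence, the cumulative excitatory/inhibitory value of a transmission cannot be an order-independent sum of static weights.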

We could also run the subjectively discontinuous model at a faster speed, to compensate for its higher quantity of steps/operations and to keep it in pace with the relationally isomorphic mathematical model, which possesses comparatively fewer procedural steps. Thus subjective-continuity could hypothetically be achieved (given the validity of the presently postulated basis for subjective-continuity, namely operational continuity) via this method of intermittent external intervention, even if extra computational steps are needed to replicate the single informational transformations and signal-combinations of the relationally isomorphic mathematical model.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Mitochondrially Targeted Antioxidant SS-31 Reverses Some Measures of Aging in Muscle – Article by Reason

The New Renaissance Hat
Reason
May 26, 2013
******************************

Originally published on the Fight Aging! website.

Antioxidants of the sort you can buy at the store and consume are pretty much useless: the evidence shows us that they do nothing for health, and may even work to block some beneficial mechanisms. Targeting antioxidant compounds to the mitochondria in our cells is a whole different story, however. Mitochondria are swarming bacteria-like entities that produce the chemical energy stores used to power cellular processes. This involves chemical reactions that necessarily generate reactive oxygen species (ROS) as a byproduct, and these tend to react with and damage protein machinery in the cell. The machinery that gets damaged the most is that inside the mitochondria, of course, right at ground zero for ROS production. There are some natural antioxidants present in mitochondria, but adding more appears to make a substantial difference to the proportion of ROS that are soaked up versus let loose to cause harm.

If mitochondria were only trivially relevant to health and longevity, this wouldn’t be a terribly interesting topic, and I wouldn’t be talking about it. The evidence strongly favors mitochondrial damage as an important contribution to degenerative aging, however. Most damage in cells is repaired pretty quickly, and mitochondria are regularly destroyed and replaced by a process of division – again, like bacteria. Some rare forms of mitochondrial damage persist, however, eluding quality-control mechanisms and spreading through the mitochondrial population in a cell. This causes cells to fall into a malfunctioning state in which they export massive quantities of ROS out into surrounding tissue and the body at large. As you age, ever more of your cells suffer this fate.

In recent years a number of research groups have been working on ways to deliver antioxidants to the mitochondria, some of which are more relevant to future therapies than others. For example, gene therapies to boost levels of natural mitochondrial antioxidants like catalase are unlikely to arrive in the clinic any time soon, but they serve to demonstrate significance by extending healthy life in mice. A Russian research group has been working with plastoquinone compounds that can be ingested and then localize to the mitochondria, and has shown numerous benefits in animal studies of the SkQ series of drug candidates.

US-based researchers have been working on a different set of mitochondrially targeted antioxidant compounds, with a focus on burn treatment. However, they recently published a paper claiming reversal of some age-related changes in muscle tissue in mice using their drug candidate SS-31. Note that this is injected, unlike SkQ compounds:

Mitochondrial targeted peptide rapidly improves mitochondrial energetics and skeletal muscle performance in aged mice

Quote:

Mitochondrial dysfunction plays a key pathogenic role in aging skeletal muscle resulting in significant healthcare costs in the developed world. However, there is no pharmacologic treatment to rapidly reverse mitochondrial deficits in the elderly. Here we demonstrate that a single treatment with the mitochondrial targeted peptide SS-31 restores in vivo mitochondrial energetics to young levels in aged mice after only one hour.

Young (5-month-old) and old (27-month-old) mice were injected intraperitoneally with either saline or 3 mg/kg of SS-31. Skeletal muscle mitochondrial energetics were measured in vivo one hour after injection using a unique combination of optical and 31P magnetic resonance spectroscopy. Age-related declines in resting and maximal mitochondrial ATP production, coupling of oxidative phosphorylation (P/O), and cell energy state (PCr/ATP) were rapidly reversed after SS-31 treatment, while SS-31 had no observable effect on young muscle.

These effects of SS-31 on mitochondrial energetics in aged muscle were also associated with a more reduced glutathione redox status and lower mitochondrial [ROS] emission. Skeletal muscle of aged mice was more fatigue resistant in situ one hour after SS-31 treatment and eight days of SS-31 treatment led to increased whole animal endurance capacity. These data demonstrate that SS-31 represents a new strategy for reversing age-related deficits in skeletal muscle with potential for translation into human use.

So what is SS-31? If you look at the publication history of these authors, you’ll find a burn-treatment-focused open-access paper that goes into a little more detail, and a 2008 review paper that covers the pharmacology of the SS compounds:

Quote:

The SS peptides, so called because they were designed by Hazel H. Szeto and Peter W. Schiller, are small cell-permeable peptides of less than ten amino acid residues that specifically target the inner mitochondrial membrane and possess mitoprotective properties. There have been a series of SS peptides synthesized and characterized, but for our study, we decided to use the SS-31 peptide (H-D-Arg-Dimethyl Tyr-Lys-Phe-NH2) for its well-documented efficacy.

Studies with isolated mitochondrial preparations and cell cultures show that these SS peptides can scavenge ROS, reduce mitochondrial ROS production, and inhibit mitochondrial permeability transition. They are very potent in preventing apoptosis and necrosis induced by oxidative stress or inhibition of the mitochondrial electron transport chain. These peptides have demonstrated excellent efficacy in animal models of ischemia-reperfusion, neurodegeneration, and renal fibrosis, and they are remarkably free of toxicity.

Given the existence of a range of different types of mitochondrial antioxidants and of research groups working on them, it seems that we should expect to see therapies emerge into the clinic over the next decade. As ever, the regulatory regime will ensure that they are approved only for use in the treatment of specific named diseases and injuries, such as burns. It is still impossible to obtain approval for a therapy to treat aging in otherwise healthy individuals in the US, as the FDA doesn’t recognize degenerative aging as a disease. The greatest use of these compounds will therefore occur via medical tourism and in a growing black market for easily synthesized compounds of this sort.

In fact, any dedicated and sufficiently knowledgeable individual could already set up a home chemistry lab, download the relevant papers, and synthesize SkQ or SS compounds. That we don’t see this happening is, I think, more of a measure of the present immaturity of the global medical tourism market than anything else. It lacks an ecosystem of marketplaces and review organizations that would allow chemists to safely participate in and profit from regulatory arbitrage of the sort that is ubiquitous in recreational chemistry.

Reason is the founder of The Longevity Meme (now Fight Aging!). He saw the need for The Longevity Meme in late 2000, after spending a number of years searching for the most useful contribution he could make to the future of healthy life extension. When not advancing the Longevity Meme or Fight Aging!, Reason works as a technologist in a variety of industries.  

This work is reproduced here in accord with a Creative Commons Attribution license.  It was originally published on FightAging.org.

Squishy Machines: Bio-Cybernetic Neuron Hybrids – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 25, 2013
******************************
This essay is the eighth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first seven chapters were previously published on The Rational Argumentator under the following titles:
***

By 2009 I considered the major classes of physicalist-functionalist replication approaches to be largely developed, with further work producing only minor variations in approach and procedure. These developments consisted of contingency plans for the case in which some aspect of neuronal operation couldn’t be replicated with alternate, non-biological physical systems and processes; the goal was to maintain those biological (or otherwise organic) systems and processes artificially and to integrate them with the processes that could be reproduced artificially.

2009 also saw further developments in the computational approach, where I conceptualized a new sub-division in the larger class of the informational-functionalist (i.e., computational, which encompasses both simulation and emulation) replication approach, which is detailed in the next chapter.

Developments in the Physicalist Approach

During this time I explored mainly varieties of the cybernetic-physical functionalist approach. This involved the use of replicatory units that preserve certain biological aspects of the neuron while replacing others with functional equivalents, alongside other NRUs that preserve alternate biological aspects while replacing different ones. The reasoning behind this approach was twofold. The first consideration was that there was a chance, no matter how small, that we might fail to sufficiently replicate some relevant aspect(s) of the neuron either computationally or physically, by failing to understand the underlying principles of that particular sub-process/aspect. The second was to have an approach that would work in the event that some material aspect couldn’t be sufficiently replicated via non-biological, physically embodied systems (i.e., the normative physical-functionalist approach).

However, these varieties were conceived of in case we couldn’t replicate certain components successfully (i.e., without functional divergence). The chances of preserving subjective-continuity in such circumstances are increased by the number of varieties we have for this class of model (i.e., different arrangements of mechanical replacement components and biological components), because we don’t know which we would fail to functionally replicate.

This class of physical-functionalist model can be usefully considered as electromechanical-biological hybrids, wherein the receptors (i.e., transporter proteins) on the post-synaptic membrane are integrated with the artificial membrane and coexist with artificial ion-channels, or wherein the biological membrane is retained while the receptors and ion-channels are replaced with functional equivalents. The biological components would be extracted from the existing biological neurons and reintegrated with the artificial membrane. Otherwise they would have to be synthesized via electromechanical systems, such as, but not limited to, chemical stores of amino acids released in specific sequences to facilitate in-vivo protein folding and synthesis, the products of which would then be transported to and integrated with the artificial membrane. This is preferable to providing stores of pre-synthesized proteins, owing to the greater complexity of storing synthesized proteins without decay or functional degradation over storage time, and of restoring them from their “stored”, inactive state to a functionally active state when they are ready for use.

During this time I also explored the possibility of using the neuron’s existing protein-synthesis systems to facilitate the construction and gradual integration of the artificial sections with the existing lipid bilayer membrane. Work in synthetic biology allows us to use viral gene vectors to replace a given cell’s constituent genome, thereby allowing us to make it manufacture various non-organic substances in place of the substances created via its normative protein synthesis. We could use such techniques to replace the existing protein-synthesis instructions with ones that manufacture and integrate the molecular materials constituting the artificial membrane sections and the artificial ion-channels and ion-pumps. Indeed, it may even be a functional necessity to gradually replace a given neuron’s protein-synthesis machinery with machinery dedicated to the replacement, integration, and maintenance of the non-biological sections’ material, because otherwise those parts of the neuron would keep trying to rebuild each section of lipid bilayer membrane we iteratively remove and replace. This could be problematic, and so successful gradual replacement of single neurons may require a means of gradually switching off and/or replacing portions of the cell’s protein-synthesis systems.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Mind as Interference with Itself: A New Approach to Immediate Subjective-Continuity – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 21, 2013
******************************
This essay is the sixth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first five chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, “Concepts for Functional Replication of Biological Neurons”, “Gradual Neuron Replacement for the Preservation of Subjective-Continuity”, and “Wireless Synapses, Artificial Plasticity, and Neuromodulation”.
***
Electromagnetic Theory of Mind
***

One line of thought I explored during this period of my conceptual work on life extension concerned whether it is not the material constituents of the brain, but rather the emergent electric or electromagnetic fields generated by their concerted operation, that instantiate mind. This work sprang from reading literature on Karl Pribram’s holonomic-brain theory, in which he developed a “holographic” theory of brain function. A hologram can be cut in half, and, if illuminated, each piece will still retain the whole image, albeit at a loss of resolution. This is due to informational redundancy in the recording procedure (i.e., because it records phase and amplitude, as opposed to just amplitude as in normal photography). Pribram’s theory sought to explain the results of experiments in which a patient had up to half of his brain removed and nonetheless retained levels of memory and intelligence comparable to what he possessed prior to the procedure, and to explain the similar results of experiments in which the brain is sectioned and the relative organization of these sections is rearranged without the drastic loss in memory or functionality one would anticipate. These experiments appear to show a holonomic principle at work in the brain. I immediately saw the relation to gradual uploading, particularly the brain’s ability to take over the function of parts recently damaged or destroyed beyond repair. I also saw the emergent electric fields produced by the brain as much better candidates for exhibiting the material properties needed for such holonomic attributes. For one, electromagnetic fields (if considered as waves rather than particles) are continuous, rather than modular and discrete as in the case of atoms.

The electric-field theory of mind also seemed to provide a hypothetical explanatory model for the existence of subjective-continuity through gradual replacement. (Remember that the existence and successful implementation of subjective-continuity is validated by our subjective sense of continuity through the normative metabolic replacement of the molecular constituents of our biological neurons, a.k.a. molecular turnover.) If the emergent electric or electromagnetic fields of the brain are indeed holonomic (i.e., possess the attribute of holographic redundancy), then we have a potential explanatory model for why the loss of a constituent module (i.e., a neuron, neuron cluster, neural network, etc.) fails to cause subjective-discontinuity: subjective-continuity is retained because the loss of a constituent part doesn’t negate the emergent information (the big picture), but only reduces its original resolution by a fraction. This looked like empirical support for the claim that it is the electric fields, rather than the material constituents of the brain, that facilitate subjective-continuity.

Another, more speculative aspect of this theory (i.e., one not supported by empirical research or literature) involved the hypothesis that the increased interaction among electric fields in the brain, i.e., interference via wave superposition, the result of which is determined by both phase and amplitude, might itself provide a physical basis for the holographic/holonomic property of “informational redundancy”, should it turn out that electric fields do not already possess or retain such holographic-redundancy attributes on their own.
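
Since the hypothesis turns on interference combining both phase and amplitude, a generic two-wave superposition may help make the property concrete. This is a minimal Python sketch using complex phasors (my own choice of representation), not in any way a model of brain fields:

import cmath

def superpose(a1, phi1, a2, phi2):
    # Superpose two waves given as (amplitude, phase) pairs; return the
    # amplitude and phase of the combined wave.
    z = a1 * cmath.exp(1j * phi1) + a2 * cmath.exp(1j * phi2)
    return abs(z), cmath.phase(z)

# In phase: amplitudes add (constructive interference).
print(superpose(1.0, 0.0, 1.0, 0.0))        # (2.0, 0.0)
# Opposite phase: the waves cancel (destructive interference).
print(superpose(1.0, 0.0, 1.0, cmath.pi))   # amplitude ~0.0

The combined wave depends on the phase relationship as well as on the amplitudes, which is exactly the property holography exploits to achieve informational redundancy.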

A local electromagnetic field is produced by the electrochemical activity of the neuron. This field then undergoes interference with other local fields; and at each step up the scale, we have more fields interfering and combining. The level of disorder makes the claim that salient computation is occurring here dubious, owing to the lack of precision and the high level of variability, which provide an ample basis for dysfunction (including increased noise, the lack of a stable, i.e., static or material, means of information storage, and poor signal transduction, or at least a high decay rate for signal propagation). However, the fact that the fields interfere at every scale means that a local electric field contains not only information encoding the operational states and functional behavior of the neuron it originated from, but also information encoding the operational states of other neurons, acquired by interacting, interfering, and combining with the fields those other neurons produce, in both amplitude and phase, as in holography. Thus, if one neuron dies, some of its properties could already have been encoded in the EM waves of others. This appeared to provide a possible physical basis for the brain’s hypothesized holonomic properties.

If electric fields are the physically continuous process that allows for continuity of consciousness (i.e., on theories of emergence), then this suggests that computational substrates instantiating consciousness need to exhibit similar properties. This is not a form of vitalism, because I am not postulating that some extra-physical (i.e., metaphysical) process instantiates consciousness, but rather that a material aspect does, and that such an aspect may have to be incorporated in any attempt at gradual substrate replacement meant to retain subjective-continuity through the procedure. It is not a matter of simulating the emergent electric fields using normative computational hardware: the claim is not that the electric fields provide needed functionality, or implement some salient aspect of computation that would otherwise be left out, but rather that they form a physical basis for continuity and emergence that is unrelated to functionality yet imperative to experiential continuity or subjectivity. I distinguish this from the type of subjective-continuity discussed thus far (the feeling of being the same person through the process of gradual substrate replacement) by calling it “immediate subjective-continuity”, as opposed to “temporal subjective-continuity”. Immediate subjective-continuity is the capacity to feel, period. Temporal subjective-continuity is the state of feeling like the same person you were. Thus, while temporal subjective-continuity inherently necessitates immediate subjective-continuity, immediate subjective-continuity does not require temporal subjective-continuity as a fundamental prerequisite.

Thus I explored variations of NRU operational modality that incorporate this (i.e., prosthetics on the cellular scale), particularly for the informational-functionalist (i.e., computational) NRUs, as the physical-functionalist NRUs were presumed to instantiate these same emergent fields via their normative operation. The approach consisted of either (a) translating the informational output of the models into the generation of physical fields, whether at the end of the process or throughout it (by providing the internal area or volume of the unit with a grid of electrically conductive nodes, such that the voltage patterns can be physically instantiated in temporal synchrony with the computational model), or (b) constructing the computational substrate instantiating the computational model so as to generate emergent electric fields in a manner as consistent with biological operation as possible. For example, in the brain a given neuron is never in an electrically neutral state, never completely off, but rather always within a range of values between on and off (see Chapter 2), which means that there is never a break (i.e., a spatiotemporal region of discontinuity) in its emergent electric fields; these operational properties would have to be replicated by any computational substrate used to replicate biological neurons via the informationalist-functionalist approach, if the premise that they facilitate immediate subjective-continuity is correct.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Common Misconceptions about Transhumanism – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
January 26, 2013
******************************

After the publication of my review of Nassim Taleb’s latest book Antifragile, numerous comments were made by Taleb’s followers – many of them derisive – on Taleb’s Facebook page. (You can see a screenshot of these comments here.) While I will only delve into a few of the specific comments in this article, I consider it important to distill the common misconceptions that motivate them. Transhumanism is often misunderstood and maligned by those who are ignorant of it – or by those who have been exposed solely to detractors such as John Gray, Leon Kass, and Taleb himself. This essay will serve to correct these misconceptions in a concise fashion. Those who still wish to criticize transhumanism should at least understand what they are criticizing and present arguments against the real ideas, rather than against straw men constructed by the opponents of radical technological progress.

Misconception #1: Transhumanism is a religion.

Transhumanism does not posit the existence of any deity or other supernatural entity (though some transhumanists are religious independently of their transhumanism), nor does transhumanism hold a faith (belief without evidence) in any phenomenon, event, or outcome. Transhumanists certainly hope that technology will advance to radically improve human opportunities, abilities, and longevity – but this is a hope founded in the historical evidence of technological progress to date, and the logical extrapolation of such progress. Moreover, this is a contingent hope. Insofar as the future is unknowable, the exact trajectory of progress is difficult to predict, to say the least. Furthermore, the speed of progress depends on the skill, devotion, and liberty of the people involved in bringing it about. Some societal and political climates are more conducive to progress than others. Transhumanism does not rely on prophecy or mystical fiat. It merely posits a feasible and desirable future of radical technological progress and exhorts us to help achieve it. Some may claim that transhumanism is a religion that worships man – but that would distort the term “religion” so far from its original meaning as to render it vacuous and merely a pejorative used to label whatever system of thinking one dislikes. Besides, those who make that allegation would probably perceive a mere semantic quibble between seeking man’s advancement and worshipping him. But, irrespective of semantics, the facts do not support the view that transhumanism is a religion. After all, transhumanists do not spend their Sunday mornings singing songs and chanting praises to the Glory of Man.

Misconception #2: Transhumanism is a cult.

A cult, unlike a broader philosophy or religion, is characterized by extreme insularity and dependence on a closely controlling hierarchy of leaders. Transhumanism has neither element. Transhumanists are not urged to disassociate themselves from the wider world; indeed, they are frequently involved in advanced research, cutting-edge invention, and prominent activism. Furthermore, transhumanism does not have a hierarchy or leaders who demand obedience. Cosmopolitanism is a common trait among transhumanists. Respected thinkers, such as Ray Kurzweil, Max More, and Aubrey de Grey, are open to discussion and debate and have had interesting differences in their own views of the future. A still highly relevant conversation from 2002, “Max More and Ray Kurzweil on the Singularity“, highlights the sophisticated and tolerant way in which respected transhumanists compare and contrast their individual outlooks and attempt to make progress in their understanding. Any transhumanist is free to criticize any other transhumanist and to adopt some of another transhumanist’s ideas while rejecting others. Because transhumanism characterizes a loose network of thinkers and ideas, there is plenty of room for heterogeneity and intellectual evolution. As Max More put it in the “Principles of Extropy, v. 3.11”, “the world does not need another totalistic dogma.”  Transhumanism does not supplant all other aspects of an individual’s life and can coexist with numerous other interests, persuasions, personal relationships, and occupations.

Misconception #3: Transhumanists want to destroy humanity. Why else would they use terms such as “posthuman” and “postbiological”?

Transhumanists do not wish to destroy any human. In fact, we want to prolong the lives of as many people as possible, for as long as possible! The terms “transhuman” and “posthuman” refer to overcoming the historical limitations and failure modes of human beings – the precise vulnerabilities that have rendered life, in Thomas Hobbes’s words, “nasty, brutish, and short” for most of our species’ past. A species that transcends biology will continue to have biological elements. Indeed, my personal preference in such a future would be to retain all of my existing healthy biological capacities, but also to supplement them with other biological and non-biological enhancements that would greatly extend the length and quality of my life. No transhumanist wants human beings to die out and be replaced by intelligent machines, and every transhumanist wants today’s humans to survive to benefit from future technologies. Transhumanists who advocate the development of powerful artificial intelligence (AI) support either (i) integration of human beings with AI components or (ii) the harmonious coexistence of enhanced humans and autonomous AI entities. Even those transhumanists who advocate “mind backups” or “mind uploading” in an electronic medium (I am not one of them, as I explain here) do not wish for their biological existences to be intentionally destroyed. They conceive of mind uploads as contingency plans in case their biological bodies perish.

Even the “artilect war” anticipated by more pessimistic transhumanists such as Hugo de Garis is greatly misunderstood. Such a war, if it arises, would not come from advanced technology, but rather from reactionaries attempting to forcibly suppress technological advances and persecute their advocates. Most transhumanists do not consider this scenario to be likely in any event. More probable are lower-level protracted cultural disputes and clashes over particular technological developments.

Misconception #4: “A global theocracy envisioned by Moonies or the Taliban would be preferable to the kind of future these traitors to the human species have their hearts set on, because even the most joyless existence is preferable to oblivion.”

The above was an actual comment on the Taleb Facebook thread. It is astonishing that anyone would consider theocratic oppression preferable to radical life extension, universal abundance, ever-expanding knowledge of macroscopic and microscopic realms, exploration of the universe, and the liberation of individuals from historical chains of oppression and parasitism. This misconception is fueled by the strange notion that transhumanists (or technological progress in general) will destroy us all – as exemplified by the “Terminator” scenario of hostile AI or the “gray goo” scenario of nanotechnology run amok. Yet all of the apocalyptic scenarios involving future technology lack the safeguards that elementary common sense would introduce. Furthermore, they lack the recognition that incentives generated by market forces, as well as the sheer numerical and intellectual superiority of the careful scientists over the rogues, would always tip the scales greatly in favor of the defenses against existential risk. As I explain in “Technology as the Solution to Existential Risk” and “Non-Apocalypse, Existential Risk, and Why Humanity Will Prevail”,  the greatest existential risks have either always been with us (e.g., the risk of an asteroid impact with Earth) or are in humanity’s past (e.g., the risk of a nuclear holocaust annihilating civilization). Technology is the solution to such existential risks. Indeed, the greatest existential risk is fear of technology, which can retard or outright thwart the solutions to the perils that may, in the status quo, doom us as a species. As an example, Mark Waser has written an excellent commentary on the “inconvenient fact that not developing AI (in a timely fashion) to help mitigate other existential risks is itself likely to lead to a substantially increased existential risk”.

Misconception #5: Transhumanists want to turn people into the Borg from Star Trek.

The Borg are the epitome of a collectivistic society, where each individual is a cog in the giant species machine. Most transhumanists are ethical individualists, and even those who have communitarian leanings still greatly respect individual differences and promote individual flourishing and opportunity. Whatever their positions on the proper role of government in society might be, all transhumanists agree that individuals should not be destroyed or absorbed into a collective where they lose their personality and unique intellectual attributes. Even those transhumanists who wish for direct sharing of perceptions and information among individual minds do not advocate the elimination of individuality. Rather, their view might better be thought of as multiple puzzle pieces being joined but remaining capable of full separation and autonomous, unimpaired function.

My own attraction to transhumanism is precisely due to its possibilities for preserving individuals qua individuals and avoiding the loss of the precious internal universe of each person. As I expressed in Part 1 of my “Eliminating Death” video series, death is a horrendous waste of irreplaceable human talents, ideas, memories, skills, and direct experiences of the world. Just as transhumanists would recoil at the absorption of humankind into the Borg, so they rightly denounce the dissolution of individuality that presently occurs with the oblivion known as death.

Misconception #6: Transhumanists usually portray themselves “like robotic, anime-like characters”.

That depends on the transhumanist in question. Personally, I portray myself as me, wearing a suit and tie (which Taleb and his followers dislike just as much – but that is their loss). Furthermore, I see nothing robotic or anime-like about the public personas of Ray Kurzweil, Aubrey de Grey, or Max More, either.

Misconception #7: “Transhumanism is attracting devotees of a frighteningly high scientific caliber, morally retarded geniuses who just might be able to develop the humanity-obliterating technology they now merely fantasize about. It’s a lot like a Heaven’s Gate cult, but with prestigious degrees in physics and engineering, many millions more in financial backing, a growing foothold in mainstream culture, a long view of implementing their plan, and a death wish that extends to the whole human race, not just themselves.”

This is another statement on the Taleb Facebook thread. Ironically, the commenter is asserting that the transhumanists, who support the indefinite lengthening of human life, have a “death wish” and are “morally retarded”, while he – who opposes the technological progress needed to preserve us from the abyss of oblivion – apparently considers himself a champion of morality and a supporter of life. If ever there was an inversion of characterizations, this is it. At least the commenter acknowledges the strong technical skills of many transhumanists – but calling them “morally retarded” presupposes a counter-morality of death that should rightly be overcome and challenged, lest it sentence each of us to death. The Orwellian mindset that “evil is good” and “death is life” should be called out for the destructive and dangerous morass of contradictions that it is. Moreover, the commenter provides no evidence that any transhumanist wants to develop “humanity-obliterating technologies” or that the obliteration of humanity is even a remote risk from the technologies that transhumanists do advocate.

Misconception #8: Transhumanism is wrong because life would have no meaning without death.

Asserting that only death can give life meaning is another bizarre contradiction, and, moreover, a claim that life can have no intrinsic value or meaning qua life. It is sad indeed to think that some people do not see how they could enjoy life, pursue goals, and accumulate values in the absence of the imminent threat of their own oblivion. Clearly, this is a sign of a lack of creativity and appreciation for the wonderful fact that we are alive. I delve into this matter extensively in my “Eliminating Death” video series. Part 3 discusses how indefinite life extension leaves no room for boredom because the possibilities for action and entertainment increase in an accelerating manner. Parts 8 and 9 refute the premise that death gives motivation and a “sense of urgency” and make the opposite case – that indefinite longevity spurs people to action by making it possible to attain vast benefits over longer timeframes. Indefinite life extension would enable people to consider the longer-term consequences of their actions. On the other hand, in the status quo, death serves as the great de-motivator of meaningful human endeavors.

Misconception #9: Removing death is like removing volatility, which “fragilizes the system”.

This sentiment was an extrapolation by a commenter on Taleb’s ideas in Antifragile. It is subject to fundamentally collectivistic premises – that the “volatility” of individual death can be justified if it somehow supports a “greater whole”. (Who is advocating the sacrifice of the individual to the collective now?)  The fallacy here is to presuppose that the “greater whole” has value in and of itself, apart from the individuals comprising it. An individualist view of ethics and of society holds the opposite – that societies are formed for the mutual benefit of participating individuals, and the moment a society turns away from that purpose and starts to damage its participants instead of benefiting them, it ceases to be desirable. Furthermore, Taleb’s premise that suppression of volatility is a cause of fragility is itself dubious in many instances. It may work to a point with an individual organism whose immune system and muscles use volatility to build adaptive responses to external threats. However, the possibility of such an adaptive response requires very specific structures that do not exist in all systems. In the case of human death, there is no way in which the destruction of a non-violent and fundamentally decent individual can provide external benefits of any kind worth having. How would the death of your grandparents fortify the mythic “society” against anything?

Misconception #10: Immortality is “a bit like staying awake 24/7”.

Presumably, those who make this comparison think that indefinite life would be too monotonous for their tastes. But, in fact, humans who live indefinitely can still choose to sleep (or take vacations) if they wish. Death, on the other hand, is irreversible. Once you die, you are dead 24/7 – and you are not even given the opportunity to change your mind. Besides, why would it be tedious or monotonous to live a life full of possibilities, where an individual can have complete discretion over his pursuits and can discover as much about existence as his unlimited lifespan allows? To claim that living indefinitely would be monotonous is to misunderstand life itself, with all of its variety and heterogeneity.

Misconception #11: Transhumanism is unacceptable because of the drain on natural resources that comes from living longer.

This argument presupposes that resources are finite and incapable of being augmented by human technology and creativity. In fact, one era’s waste is another era’s treasure (as occurred with oil since the mid-19th century). As Julian Simon recognized, the ultimate resource is the human mind and its ability to discover new ways to harness natural laws to human benefit. We have more resources known and accessible to us now – both in terms of food and the inanimate bounties of the Earth – than ever before in recorded history. This has occurred in spite of – and perhaps because of – dramatic population growth, which has also introduced many new brilliant minds into the human species. In Part 4 of my “Eliminating Death” video series, I explain that doomsday fears of overpopulation do not hold, either historically or prospectively. Indeed, the progress of technology is precisely what helps us overcome strains on natural resources.

Conclusion

The opposition to transhumanism is generally limited to espousing some variations of the common fallacies I identified above (with perhaps a few others thrown in). To make real intellectual progress, it is necessary to move beyond these fallacies, which serve as mental roadblocks to further exploration of the subject – a justification for people to consider transhumanism too weird, too unrealistic, or too repugnant to even take seriously. Detractors of transhumanism appear to recycle these same hackneyed remarks as a way to avoid seriously delving into the actual and genuinely interesting philosophical questions raised by emerging technological innovations. These are questions on which many transhumanists themselves hold sincere differences of understanding and opinion. Fundamentally, though, my aim here is not to “convert” the detractors – many of whose opposition is beyond the reach of reason, for it is not motivated by reason. Rather, it is to speak to laypeople who are not yet swayed one way or the other, but who might not have otherwise learned of transhumanism except through the filter of those who distort and grossly misunderstand it. Even an elementary explication of what transhumanism actually stands for will reveal that we do, in fact, strongly advocate individual human life and flourishing, as well as technological progress that will uplift every person’s quality of life and range of opportunities. Those who disagree with any transhumanist about specific means for achieving these goals are welcome to engage in a conversation or debate about the merits of any given pathway. But an indispensable starting point for such interaction involves accepting that transhumanists are serious thinkers, friends of human life, and sincere advocates of improving the human condition.

Philosophy Lives – Contra Stephen Hawking – Video by G. Stolyarov II

Mr. Stolyarov’s refutation of Stephen Hawking’s statement that “philosophy is dead.”

In his 2010 book The Grand Design, cosmologist and theoretical physicist Stephen Hawking writes that science has displaced philosophy in the enterprise of discovering truth. While I have great respect for Hawking both in his capacities as a physicist and in his personal qualities — his advocacy of technological progress and his determination and drive to achieve in spite of his debilitating illness — the assertion that the physical sciences can wholly replace philosophy is mistaken. Not only is philosophy able to address questions outside the scope of the physical sciences, but the coherence and validity of scientific approaches itself rests on a philosophical foundation that was not always taken for granted — and still is not in many circles.

References
– “Philosophy Lives – Contra Stephen Hawking” – Essay by G. Stolyarov II
– “The Grand Design (book)” – Wikipedia
– “Stephen Hawking” – Wikipedia

Philosophy Lives – Contra Stephen Hawking – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
January 1, 2013
******************************

In his 2010 book The Grand Design, cosmologist and theoretical physicist Stephen Hawking writes that science has displaced philosophy in the enterprise of discovering truth. While I have great respect for Hawking both in his capacities as a physicist and in his personal qualities – his advocacy of technological progress and his determination and drive to achieve in spite of his debilitating illness – the assertion that the physical sciences can wholly replace philosophy is mistaken. Not only is philosophy able to address questions outside the scope of the physical sciences, but the coherence and validity of scientific approaches itself rests on a philosophical foundation that was not always taken for granted – and still is not in many circles.

Hawking writes, “Living in this vast world that is by turns kind and cruel, and gazing at the immense heavens above, people have always asked a multitude of questions: How can we understand the world in which we find ourselves? How does the universe behave? What is the nature of reality? Where did all this come from? Did the universe need a creator? Most of us do not spend most of our time worrying about these questions, but almost all of us worry about them some of the time. Traditionally these are questions for philosophy, but philosophy is dead. Philosophy has not kept up with modern developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge.”

I hesitate to speculate why Hawking considers philosophy to be “dead” – but perhaps this view partly arises from frustration at the non-reality-oriented teachings of many postmodernist philosophers who still prevail in many academic and journalistic circles. Surely, those who deny the comprehensibility of reality and allege that it is entirely a societal construction do not aid in the quest for discovery and understanding of what really exists. Likewise, our knowledge cannot be enhanced by those who deny that there exist systematic and specific methods that are graspable by human reason and that can be harnessed for the purposes of discovery. It is saddening indeed that prominent philosophical figures have embraced anti-realist positions in metaphysics and anti-rational, anti-empirical positions in epistemology. Physicists, in their everyday practice, necessarily rely on external observational evidence and on logical deductions from the empirical data. In this way, and to the extent that they provide valid explanations of natural phenomena, they are surely more reality-oriented than most postmodernist philosophers. Yet philosophy does not need to be this way – and, indeed, philosophical schools of thought throughout history and in the present day are not only compatible with the scientific approach to reality, but indispensable to it.

Contrary to the pronouncements of prominent postmodernists, a venerable strain of thought – dating back to at least Aristotle and extending all the way to today’s transhumanists, Objectivists, and natural-law thinkers – holds that an objective reality exists, that it can be understood through systematic observation and reason, and that its understanding should be pursued by all of us. This is the philosophical strain responsible for the accomplishments of Classical Antiquity and the progress made during the Renaissance, the Enlightenment, the Industrial Revolution, and the Information Revolution. While such philosophy is not the same as the physical sciences, the physical sciences rely on it to the extent that they embrace the approach known as the scientific method, which itself rests on philosophical premises. These premises include the existence of an external reality independent of the wishes and imagination of any observer, the existence of a definite identity of any given entity at any given time, the reliance on identical conditions producing identical outcomes, the principles of causation and non-contradiction, and the ability of human beings to systematically alter outcomes in the physical world by understanding its workings and modifying physical systems accordingly. This latter principle – that, in Francis Bacon’s words, “Nature, to be commanded, must be obeyed” – was the starting point for the Scientific Revolution of the 17th Century, which inaugurated subsequent massive advances in technology, standards of living, and human understanding of the universe. Even those scientists who do not acknowledge the importance of philosophy, or who explicitly reject it, nonetheless implicitly rely on these premises in the very conduct of their scientific work – to the extent that such work accurately describes reality. These premises are not the only ones possible – but they are the only ones that are fully right. Alternatives – including reliance on alleged supernatural revelation, wishful thinking, and unconditional deference to authority – have been tried time and again, only to result in stagnation and mental traps that prevented substantive improvements to the human condition.

But there is more. Not only are the physical sciences without a foundation if philosophy is ignored, but the very reason for pursuing them remains unaddressed without the branch of philosophy that focuses on what we ought to do: ethics. Contrary to those who would posit an insurmountable “is-ought” gap, ethics can indeed be derived from the facts of reality, but not solely by the tools of physics, chemistry, biology, or any of the other “hard” physical sciences. An additional element is required: the fact that we ourselves exist as rational, conscious beings, who are capable of introspection and of analysis of external data. From the physical sciences we can derive ways to sustain and improve our material well-being – sometimes our very survival. But only ethics can tell us that we ought to pursue such survival – a conclusion we reach through introspection and logical reasoning. No experiment, no test is needed to tell us that we ought to keep living. This conclusion is antecedent to the consistent pursuit of any action at all; to achieve any goal, we must be alive. To pursue death, the opposite of life, contradicts the very notion of acting, which has life as a prerequisite. Once we have accepted that premise, an entire system of logical deductions follows with regard to how we ought to approach the external world – the pursuit of knowledge, interactions with others, improvement of living conditions, protection against danger. The physical sciences can provide much of the empirical data and many of the regularities needed to assess alternative ways of living and to develop optimal solutions to human challenges. But ethics is needed to keep the goals of scientific study in mind. These goals should ultimately relate to ways of enhancing human well-being. If the pursuit of human well-being – consistent with the imperative of each individual to continue living – is abandoned, then the physical sciences alone cannot provide adequate guidance. Indeed, they can be utilized to produce horrors – as the development of nuclear weapons in the 20th century exemplified. Geopolitical considerations of coercive power and nationalism were permitted to overshadow humanistic considerations of life and peace; hundreds of thousands of innocents perished due to a massive government-sponsored science project, while the fate of human civilization hung in the balance for over four decades.

The questions cited by Hawking are indeed philosophical questions, at least in part. Aspects of these questions, while they broadly rely on the existence of an objective reality, do not require specific experiments to answer. Rather, like many of the everyday questions of our existence, they rely only on the ubiquitous inputs of our day-to-day experience, generalized within our minds and formulated as starting premises for a logical deductive process. The question “How can we understand the world in which we find ourselves?” has different answers based on the realm of focus and endeavor. Are we looking to understand the function of a mechanism, or the origin of a star? Different tools are needed for each, but both inquiries call for systematic experimentation and observation. This is an opening for the physical sciences and the scientific method. There are, however, ubiquitous observations about our everyday world that can be used as inputs into our decision-making – a process we engage in regularly as we navigate a room, eat a meal, engage in conversation or deliberation, or transport any object whatsoever. Simply as a byproduct of routine living, these observations provide us with ample data for a series of logical deductions and inferences which do not strictly belong to any scientific branch, even though specific parts of our world could be better understood through closer scientific observation.

The question “How does the universe behave?” actually arises in part from a philosophical presupposition that “the universe” is a single entity with any sort of coordinated behavior whatsoever. An alternative view – which I hold – is that the word “universe” is simply convenient mental shorthand for describing the totality of every single entity that exists, in lieu of actually enumerating them all. Thus, while each entity has its own definite nature, “the universe” may not have a single nature or behavior. Perhaps a more accurate framing of that question would be, “What attributes or behaviors are common to all entities that exist?” To answer that question, a combination of ubiquitous observation and scientific experimentation is required. Ubiquitous observation tells us that all entities are material, but only scientific experimentation can tell us what the “building blocks” of matter are. Philosophy alone cannot recommend any particular model of the atom or of subatomic particles from among multiple competing non-contradictory models. Philosophy can, however, rightly serve to check the logical coherence of any particular model and to reject erroneous interpretations of data which produce internally contradictory answers. Such rejection does not mean that the data are inaccurate, or even that a particular scientific theory cannot predict the behavior of entities – but rather that any verbal understanding of the accurate data and predictive models should also be consistent with logic, causation, and everyday human experience. At the very least, if a coherent verbal understanding is beyond our best efforts at present, philosophy should be vigilant against the promulgation of incoherent verbal understandings. It is better to leave certain scientific models as systems of mathematical equations, uncommented upon, than to posit evidently false interpretations that undermine laypeople’s confidence in the validity of our very existence and reasoning.

After all – to return to the ethical purpose of science – one major goal of scientific inquiry is to understand and explain the world we live in and experience on a daily basis. If any scientific model is said to entail the conclusion that our world does not “really” exist or that our entire experience is illusory (rather than merely subject to occasional quirks in our biology, such as those which produce optical illusions and mislead us, in avoidable ways, under specific and unusual circumstances), then it is the philosophical articulation of that model that is flawed. The model itself may be retained in another form – such as mathematical notation – that can be used to predict and study phenomena which continue to defy verbal understanding, with the hope that someday a satisfactory verbal understanding will be attained. Without this philosophic vigilance, scientific breakthroughs may be abused by charlatans for the purpose of misleading people into ruining their lives. As a prominent example, multiple strains of mysticism have arisen out of bad philosophical interpretations of quantum mechanics – for instance, the belief, articulated in such pseudo-self-help books as The Secret, that people can mold reality with their thoughts alone and that, instead of working hard and thinking rationally, they can become immensely wealthy and cure themselves of cancer just by wanting it enough. Without a rigorous philosophical defense of reason and objective reality, whether by scientists themselves or by their philosopher allies, this mystical nonsense will render scientific enterprises increasingly misunderstood by and isolated from large segments of the public, who will become increasingly superstitious, anti-intellectual, and reliant on wishful thinking.

The question “What is the nature of reality?” is a partly philosophical and partly scientific one. The philosophical dimension – metaphysics – is needed to posit that an objective, understandable reality exists at all. The scientific dimension comes into play in comprehending specific real entities, from stars to biological organisms – relying on the axioms and derivations of metaphysics for the experimental study of such entities to even make sense or promise to produce reliable results. Philosophy cannot tell you what the biological structure of a given organism is like, but it can tell you that there is one, and that praying or wishing really hard to understand it will not reveal its identity to you. Philosophy can also tell you that, in the absence of external conditions that would dramatically affect that biological structure, it will not magically change into a dramatically different structure.

The questions “Where did all this come from? Did the universe need a creator?” are scientific only to a point. When exploring the origin of a particular planet or star – or of life on Earth – they are perfectly amenable to experimentation and to extrapolation from historical evidence. Hence, the birth of the solar system, abiogenesis, and biological evolution are all appropriate subjects of study for the hard sciences. Moreover, scientific study can address the question of whether a particular object needed to have a creator and can, for instance, conclude that a mechanical watch needed to have a watchmaker, but no analogous maker needed to exist to bring about the structure of a complex biological organism. However, if the question arises as to whether existence itself had an origin or needed a creator, this is a matter for philosophy. Indeed, rational philosophy can point out the contradiction in the view that existence itself could ever not have existed, or that a creator outside of existence (and, by definition, non-existent at that time) could have brought existence into being.

Interestingly enough, Hawking comes to a similar conclusion – that cosmological history can be understood through a model that does not include a sentient creator. I am glad that Hawking holds this view, but this specific conclusion does not require theoretical or experimental physics to validate it; it simply requires a coherent understanding of terms such as “existence”, “universe”, and “creator”. Causation and non-contradiction both preclude the possibility of any ex nihilo creation. As for the question of whether there exist beings capable of vast cosmic manipulations and even the design of life forms – that is an empirical matter. Perhaps someday such beings will be discovered; perhaps someday humans will themselves become such beings through mastery of science and technology. The first steps have already been taken – for instance, with Craig Venter’s design of a synthetic living bacterium. Ethics suggests to me that this mastery of life is a worthwhile goal and that its proponents – transhumanists – should work to persuade those philosophers and laypeople who disagree.

More constructive dialogue between rational scientists and rational philosophers is in order, for the benefit of both disciplines. Philosophy can serve as a check on erroneous verbal interpretations of scientific discoveries, as well as an ethical guide for the beneficial application of those discoveries. Science can serve to provide observations and regularities which assist in the achievement of philosophically motivated goals. Furthermore, science can serve to disconfirm erroneous philosophical positions, in cases where philosophy ventures too far into specific empirical predictions which experimentation and targeted observation might falsify. To advance such fruitful interactions, it is certainly not productive to proclaim that one discipline or another is “dead”. I will be the first to admit that contemporary philosophy, especially of the kind that enjoys high academic prestige, is badly in need of reform. But such reform is only possible after widespread acknowledgment that philosophy does have a legitimate and significant role, and that it can do a much better job in fulfilling it.

Update to Resources on Indefinite Life Extension – July 10, 2012

TRA’s Resources on Indefinite Life Extension page has been enhanced over the past two months with links to numerous fascinating articles and videos.

Articles

– “Scientists turn skin cells into beating heart muscle” – Kate Kelland – Reuters – May 22, 2012

– “Is Amyloidosis the Limiting Factor for Human Lifespan?” – Lyle J. Dennis, M.D. – Extreme Longevity – May 22, 2012

– “Israeli scientists create beating heart tissue from skin cells” – The Telegraph – May 23, 2012

– “Paralyzed rats walk again in Swiss lab study” – Chris Wickham – MSNBC.com – May 31, 2012

– “New Cancer Drugs Use Body’s Own Defenses” – Ron Winslow – Wall Street Journal – June 1, 2012

– “Bristol immune drug shows promise in three cancers” – Julie Steenhuysen – Reuters – June 2, 2012

– “Prostate cancer drug so effective trial stopped” – Victoria Colliver – San Francisco Chronicle – June 2, 2012

– “New ‘smart bomb’ drug attacks breast cancer, doctors say” – Associated Press – June 3, 2012

– “Alzheimer’s vaccine trial a success” – Karolinska Institutet – June 6, 2012

– “Man Cured of AIDS: ‘I Feel Good’” – Carrie Gann – ABC News – June 8, 2012

– “Artificial Lifeforms Promise Cleaner World, Healthier Humans” – Dick Pelletier – Positive Futurist – June 9, 2012

– “Secret of ageing found: Japanese scientists pave way to everlasting life” – RT – June 9, 2012

– “How aging normal cells fuel tumor growth and metastasis” – Thomas Jefferson University – June 14, 2012

– “People Who Justify Aging are Profoundly Wrong – Aging is Abhorrent” – Maria Konovalenko – Institute for Ethics & Emerging Technologies – June 14, 2012

– “Scientists tie DNA repair to key cell signaling network” – University of Texas Medical Branch at Galveston – June 15, 2012

– “Deciding How We Age as We Age” – Seth Cochran – h+ Magazine – June 19, 2012

– “How we die (in one chart)” – Sarah Kliff – Washington Post – June 22, 2012

– “Modified humans: the most cost-efficient way to colonize space” – Dick Pelletier – Positive Futurist – June 2012

– “Japanese Scientists Grow Human Liver From Stem Cells” – Reuters and Singularity Weblog – June 2012

– “Why Do Naked Mole Rats Live So Long? Do they hold the key to human life extension?” – Maria Konovalenko – Institute for Ethics & Emerging Technologies – June 29, 2012

– “Scientists Develop Alternative to Gene Therapy” – ScienceDaily – Scripps Research Institute – July 1, 2012

– “How to live beyond 100” – Lucy Wallis – BBC News – July 2, 2012

– “Earth 2050-2100: longer lives; new energy; FTL travel; global village” – Dick Pelletier – Positive Futurist – July 3, 2012

– “Scientists discover bees can ‘turn back time,’ reverse brain aging” – Phys.org – Arizona State University – July 3, 2012

– “Secret formula may be key to reverse aging” – Mike Holfeld – Click Orlando – July 4, 2012

– “Is there a biological limit to longevity?” – Aubrey de Grey – KurzweilAI – July 5, 2012

– “Demystifying the immortality of cancer cells” – Medical Xpress – July 5, 2012

– “Suggesting a Test of Rapamycin and Metformin Together” – Reason – FightAging.org – July 5, 2012

– “Earth 2050-2100: Longer Lives; New Energy; FTL Travel; Global Village” – Dick Pelletier – Positive Futurist – July 7, 2012

Videos

Aubrey de Grey

Aubrey de Grey – Aging & Suffering – Interview with Adam Ford – May 31, 2012

Nikola Danaylov (Socrates)

Anders Sandberg on Singularity 1 on 1: We Are All Amazingly Stupid, But We Can Get Better – May 27, 2012

Hugo de Garis on Singularity 1 on 1: Are We Building Gods or Terminators? – June 2012