Browsed by Category: Technology

Against Monsanto, For GMOs – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
June 9, 2013
******************************

The depredations of the multinational agricultural corporation Monsanto are rightly condemned by many. Monsanto is a prominent example of a crony corporation – a company that bolsters its market dominance not through honest competition and innovation, but through the persistent use of the political and legal system to enforce its preferences against its competitors and customers. Most outrageous is Monsanto’s stretching of patents beyond all conceivable limits – attempting to patent genes and life forms and to forcibly destroy the crops of farmers who replant seeds from crops originally obtained from Monsanto.

Yet because Monsanto is one of the world’s leading producers of genetically modified crops, campaigners who oppose all genetically modified organisms (GMOs) often use Monsanto as the poster child for the problems with GMOs as a whole. The March Against Monsanto, which took place in cities worldwide in late May of 2013, is the most recent prominent example of this conflation. The blanket condemnation of GMOs because of Monsanto’s misbehavior is deeply fallacious. The policy of a particular company does not serve to discredit an entire class of products, just because that company produces those products – even if it could be granted that the company’s actions result in its own products being more harmful than they would otherwise be.

GMOs, in conventional usage, are any life forms that have been altered through techniques more advanced than the kind of selective breeding that has existed for millennia. In fact, the only material distinction between genetic engineering and selective breeding is the degree to which the procedure is targeted toward specific features of an organism. Whereas selective breeding is largely based on observation of the organism’s phenotype, genetic engineering relies on more precise manipulation of the organism’s DNA. Because of its ability to focus more closely on specific desirable or undesirable attributes, genetic engineering is less subject to unintended consequences than a solely macroscopic approach. Issues of a particular company’s abuse of the political system and its attempts to render the patent system ever more draconian do not constitute an argument against GMOs or the techniques used to create them.

Consider that Monsanto’s behavior is not unique; similar depredations are found throughout the status quo of crony corporatism, where many large firms thrive not on the basis of merit, but on the basis of political pull and institutionalized coercion. The Walt Disney Company has made similarly outrageous (and successful) attempts to extend the intellectual-property system solely for its own benefit. The 1998 Copyright Term Extension Act was primarily motivated by Disney’s lobbying to prevent the character of Mickey Mouse from entering the public domain. Yet are all films, and all animated characters, evil or wrong because Disney manipulated the legal system instead of competing fairly and honestly on the market? Surely, to condemn films on the basis of Disney’s behavior would be absurd.

Consider, likewise, Apple Inc., which has attempted to sue its competitors’ products out of existence and to patent the rectangle with rounded corners – a geometric shape no less basic an idea in mathematics than a trapezoid or an octagon. Are all smartphones, tablet computers, MP3 players, and online music services – including those of Apple’s competitors – wrong and evil solely because of Apple’s unethical use of the legal system to squelch competition? Surely not! EA, until May 2013, embedded crushingly restrictive digital-rights management (DRM) into its products, requiring a continuous Internet connection (and de facto continual monitoring of the user by EA) for some games to be playable at all. Are all computer games and video games evil and wrong because of EA’s intrusive anti-consumer practices? Should they all be banned in favor of only those games that use pre-1950s-era technology – e.g., board games and other tabletop games? If the reader does not support the wholesale abolition, or even the limitation, of films, consumer electronics, and games as a result of the misbehavior of prominent makers of these products, then what rationale can there possibly be for viewing GMOs differently?

Indeed, the loathing of all GMOs stems from a more fundamental fallacy, for which any criticism of Monsanto only provides convenient cover. That fallacy is the assumption that “the natural” – i.e., anything not affected by human technology, or, more realistically, human technology of sufficiently recent origin – is somehow optimal for human purposes or simply for its own sake. While it is logically conceivable that some genetic modifications to organisms could render them more harmful than they would otherwise be (though there has never been any evidence of such harms arising despite the trillions of servings of genetically modified foods consumed to date), the condemnation of all genetic modifications using techniques from the last 60 years is far more sweeping than this. Such condemnation is not and cannot be scientific; rather, it is an outgrowth of the indiscriminate anti-technology agenda of the anti-GMO campaigners. A scientific approach, based on experimentation, empirical observation, and the immense knowledge thus far amassed regarding chemistry and biology, might conceivably give rise to a sophisticated classification of GMOs based on gradations of safety, safe uses, unsafe uses, and possible yet-unknown risks. The anti-GMO campaigners’ approach, on the other hand, can simply be summarized as “Nature good – human technology bad” – not scientific or discerning at all.

The reverence for purportedly unaltered “nature” completely ignores the vicious, cruel, appallingly wasteful (not even to mention suboptimal) conditions of any environment untouched by human influence. After all, 99.9% of all species that ever existed are extinct – the vast majority from causes that arose long before human beings evolved. The plants and animals that primitive hunter-gatherers consumed did not evolve with the intention of providing optimal nutrition for man; they simply happened to be around, attainable for humans, and nutritious enough that humans did not die right away after consuming them – and some humans (the ones that were not poisoned, or killed hunting, or murdered by their fellow men) managed to survive to reproductive age by eating these “natural” foods. Just because the primitive “paleo” diet of our ancestors enabled them to survive long enough to trigger the chain of events that led to us, does not render their lives, or their diets, ideal for emulation in every aspect. We can do better. We must do better – if protection of large numbers of human beings from famine, drought, pests, and prohibitive costs of food is to be considered a moral priority in the least. By depriving human beings of the increased abundance, resilience, and nutritional content that only the genetic modification of foods can provide, anti-GMO campaigners would sentence millions – perhaps billions – of humans to the miserable subsistence conditions and tragically early deaths of their primeval forebears, of whom the Earth could support only a few million without human agricultural interventions.

We do not need to like Monsanto in order to embrace the life-saving, life-enhancing potential of GMOs. We need to consider the technology involved in GMOs on its own terms, imagining how we would view it if it could be delivered by economic arrangements we would prefer. As a libertarian individualist, I advocate for a world in which GMOs could be produced by thousands of competing firms, each fairly trying to win the business of consumers through the creation of superior products which add value to people’s lives. If you are justifiably concerned about the practices of Monsanto, consider working toward a world like that, instead of a world where the promise of GMOs is denied to the billions who currently owe their very existences to human technology and ingenuity.

Tapping the Transcendence Drive – Article by D.J. MacLennan

The New Renaissance Hat
D. J. MacLennan
June 2, 2013
******************************

What do we want? No, I mean, what do we really want?

Your eyes flick back and forth between your smartphone and your iPad; your coffee cools on the dusty coaster beside the yellowing PC monitor; you momentarily look to the green vista outside your window but don’t fully register it; Facebook fade-scrolls the listless postings of tens of phase-locked ‘friends’, while the language-association areas of your brain chisel at your clumsy syntax, relentlessly sculpting it down to the 140-character limit of your next Twitter post.

The noise, the noise; the pink and the brown, the blue and the white. What do we want? How do we say it?

As I am a futurist, it’s understandable that people sometimes ask me what I can tell them about the future. What do I say? How about, “Well, it won’t be the same as the past”? On many levels, this is an unsatisfying answer. But, importantly, it is neither a stupid nor an empty one. If it sounds a bit Zen, that is only because people are so used to a mode of thinking about the future that has it looking quite a lot like the past but with more shiny bits and bigger (and much flatter) flatscreens.

What I prefer to say, when there is more time available for the conversation, is, “It depends on what you, and others, want, and upon what you do to get those things.” Another unsatisfying response?

Where others see shiny stuff, I see the physical manifestations of drives. After all, what are Facebook, Twitter, and iPads but manifestations of drives? Easy, isn’t it? We can now glibly state that Twitter and Facebook are manifestations of the drive to communicate, and that the iPad is a manifestation of the desire to possess shiny stuff that does a slick job of enabling us to better pursue our recreational, organizational, and communicational drives.

There are, however, problems with this way of looking at drives. If, for example, we assume, based on the evidence of the boom in the use of communication technologies, that people have a strong drive to stay in touch with each other, we will simply churn out more and more of the same kinds of communication devices and platforms. If, on the other hand, we look at the overarching drive behind the desire to communicate, we can better address the real needs of the end user.

As another example, look back to early computer gaming. What was the main drive of the teenager playing Pong on Atari’s first arcade version of the game, released in 1972? If you had asked an impartial observer in 1972, they might well have opined that the fun of Pong stemmed from the fact that it was like table tennis; table tennis is fun, so a bleepy digital version of it in a big yellow box should also be fun. While not completely incorrect, such an opinion would be based solely upon the then-current gaming context. In following the advice of such an observer, an arcade-game manufacturer might have invested, and probably lost, an enormous amount of money in producing more and more electronic versions of simple tabletop games. But, fortunately for the computer-game industry, many manufacturers realized that the fun of arcade games was largely in the format, and so began to abandon the notion that they should be digital representations of physical games.

If we jump to a modern MMORPG involving player avatars, such as World of Warcraft, we find a situation radically different from that which prevailed in 1972, but I would argue that many observers still make the same kinds of mistakes in extrapolating the drives of the players. It’s all about “recreation” and “role-playing”, right?

I think that many technology manufacturers underestimate and misunderstand our true drives. I admit to being an optimist on such matters, but what if, just for a moment, we assume that the drives of technology-obsessed human beings (even the ones playing Angry Birds, or posting drunken nonsense on Facebook) are actually grand and noble ones? What if we really think about what it is that they are trying to do? Now we begin to get somewhere. We can then see the Facebook postings as an individual’s yearning for registration of his or her existence; a drive towards self-actualization with a voice augmented beyond the hoarse squeak of the physical one. We can see individuals’ appreciation of the clean lines of their iPads as a desire for rounded-corner order in a world of filth and tangle. We can see their enjoyment of moving their avatar around World of Warcraft as the beginnings of a massive stretching of their concept of self, to a point where it might break open and merge colorfully with the selves of others.

One hundred and forty characters: I know it doesn’t look much like a drive for knowledge and transcendence, but so what? Pong didn’t look much like Second Life; the telegraph didn’t look much like the iPad. The past is a poor guide to the future. A little respect for, and more careful observation of, what might be the true drives of the technology-obsessed would, I think, help us to create a future enhanced by enabling technologies, and not one awash with debilitating noise.

D.J. MacLennan is a futurist writer and entrepreneur, and is signed up with Alcor for cryonic preservation. He lives in, and works from, a modern house overlooking the sea on the coast of the Isle of Skye, in the Highlands of Scotland.

See more of D.J.’s writing at extravolution.com and futurehead.com.

Mitochondrially Targeted Antioxidant SS-31 Reverses Some Measures of Aging in Muscle – Article by Reason

The New Renaissance Hat
Reason
May 26, 2013
******************************

Originally published on the Fight Aging! website.

Antioxidants of the sort you can buy at the store and consume are pretty much useless: the evidence shows us that they do nothing for health, and may even work to block some beneficial mechanisms. Targeting antioxidant compounds to the mitochondria in our cells is a whole different story, however. Mitochondria are swarming bacteria-like entities that produce the chemical energy stores used to power cellular processes. This involves chemical reactions that necessarily generate reactive oxygen species (ROS) as a byproduct, and these tend to react with and damage protein machinery in the cell. The machinery that gets damaged the most is that inside the mitochondria, of course, right at ground zero for ROS production. There are some natural antioxidants present in mitochondria, but adding more appears to make a substantial difference to the proportion of ROS that are soaked up versus let loose to cause harm.

If mitochondria were only trivially relevant to health and longevity, this wouldn’t be a terribly interesting topic, and I wouldn’t be talking about it. The evidence strongly favors mitochondrial damage as an important contribution to degenerative aging, however. Most damage in cells is repaired pretty quickly, and mitochondria are regularly destroyed and replaced by a process of division – again, like bacteria. Some rare forms of mitochondrial damage persist, however, eluding quality-control mechanisms and spreading through the mitochondrial population in a cell. This causes cells to fall into a malfunctioning state in which they export massive quantities of ROS out into surrounding tissue and the body at large. As you age, ever more of your cells suffer this fate.

In recent years a number of research groups have been working on ways to deliver antioxidants to the mitochondria, some of which are more relevant to future therapies than others. For example, gene therapies to boost levels of natural mitochondrial antioxidants like catalase are unlikely to arrive in the clinic any time soon, but they serve to demonstrate significance by extending healthy life in mice. A Russian research group has been working with plastoquinone compounds that can be ingested and then localize to the mitochondria, and has shown numerous benefits in animal studies of the SkQ series of drug candidates.

US-based researchers have been working on a different set of mitochondrially targeted antioxidant compounds, with a focus on burn treatment. However, they recently published a paper claiming reversal of some age-related changes in muscle tissue in mice using their drug candidate SS-31. Note that this is injected, unlike SkQ compounds:

Mitochondrial targeted peptide rapidly improves mitochondrial energetics and skeletal muscle performance in aged mice

Quote:

Mitochondrial dysfunction plays a key pathogenic role in aging skeletal muscle resulting in significant healthcare costs in the developed world. However, there is no pharmacologic treatment to rapidly reverse mitochondrial deficits in the elderly. Here we demonstrate that a single treatment with the mitochondrial targeted peptide SS-31 restores in vivo mitochondrial energetics to young levels in aged mice after only one hour.

Young (5 month old) and old (27 month old) mice were injected intraperitoneally with either saline or 3 mg/kg of SS-31. Skeletal muscle mitochondrial energetics were measured in vivo one hour after injection using a unique combination of optical and 31P magnetic resonance spectroscopy. Age-related declines in resting and maximal mitochondrial ATP production, coupling of oxidative phosphorylation (P/O), and cell energy state (PCr/ATP) were rapidly reversed after SS-31 treatment, while SS-31 had no observable effect on young muscle.

These effects of SS-31 on mitochondrial energetics in aged muscle were also associated with a more reduced glutathione redox status and lower mitochondrial [ROS] emission. Skeletal muscle of aged mice was more fatigue resistant in situ one hour after SS-31 treatment and eight days of SS-31 treatment led to increased whole animal endurance capacity. These data demonstrate that SS-31 represents a new strategy for reversing age-related deficits in skeletal muscle with potential for translation into human use.

So what is SS-31? If you look at the publication history of these authors, you’ll find a burn-treatment-focused open-access paper that goes into a little more detail and a 2008 review paper that covers the pharmacology of the SS compounds:

Quote:

The SS peptides, so called because they were designed by Hazel H. Szeto and Peter W. Schiller, are small cell-permeable peptides of less than ten amino acid residues that specifically target the inner mitochondrial membrane and possess mitoprotective properties. There have been a series of SS peptides synthesized and characterized, but for our study, we decided to use the SS-31 peptide (H-D-Arg-Dimethyl Tyr-Lys-Phe-NH2) for its well-documented efficacy.

Studies with isolated mitochondrial preparations and cell cultures show that these SS peptides can scavenge ROS, reduce mitochondrial ROS production, and inhibit mitochondrial permeability transition. They are very potent in preventing apoptosis and necrosis induced by oxidative stress or inhibition of the mitochondrial electron transport chain. These peptides have demonstrated excellent efficacy in animal models of ischemia-reperfusion, neurodegeneration, and renal fibrosis, and they are remarkably free of toxicity.

Given the existence of a range of different types of mitochondrial antioxidants and the research groups working on them, it seems that we should expect to see therapies emerge into the clinic over the next decade. As ever, the regulatory regime will ensure that they are only approved for use in the treatment of specific named diseases and injuries such as burns, however. It’s still impossible to obtain approval for a therapy to treat aging in otherwise healthy individuals in the US, as the FDA doesn’t recognize degenerative aging as a disease. The greatest use of these compounds will therefore occur via medical tourism and in a growing black market for easily synthesized compounds of this sort.

In fact, any dedicated and sufficiently knowledgeable individual could already set up a home chemistry lab, download the relevant papers, and synthesize SkQ or SS compounds. That we don’t see this happening is, I think, more of a measure of the present immaturity of the global medical tourism market than anything else. It lacks an ecosystem of marketplaces and review organizations that would allow chemists to safely participate in and profit from regulatory arbitrage of the sort that is ubiquitous in recreational chemistry.

Reason is the founder of The Longevity Meme (now Fight Aging!). He saw the need for The Longevity Meme in late 2000, after spending a number of years searching for the most useful contribution he could make to the future of healthy life extension. When not advancing the Longevity Meme or Fight Aging!, Reason works as a technologist in a variety of industries.  

This work is reproduced here in accord with a Creative Commons Attribution license.  It was originally published on FightAging.org.

Wireless Synapses, Artificial Plasticity, and Neuromodulation – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 21, 2013
******************************
This essay is the fifth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first four chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, “Concepts for Functional Replication of Biological Neurons”, and “Gradual Neuron Replacement for the Preservation of Subjective-Continuity”.
***

Morphological Changes for Neural Plasticity

The finished physical-functionalist units would need the ability to change their emergent morphology, not only for active modification of single-neuron functionality but even for basic functional replication of normative neuron behavior, because neural plasticity and the morphological changes that facilitate learning and memory must be taken into account. My original approach involved the use of retractable, telescopic dendrites and axons (with corresponding internal retractable and telescopic dendritic spines and axonal spines, respectively) activated electromechanically by the unit-CPU. For morphological changes, by providing the edges of each membrane section with an electromechanical hinged connection (i.e., a means of changing the angle of inclination between immediately adjacent sections), the emergent morphology can be controllably varied. This eventually developed into an internal compartment designed to detach a given membrane section, move it down into the internal compartment of the neuronal soma or terminal, transport it along a track that stores alternative membrane sections stacked face-to-face (to compensate for limited space), and subsequently replace it with a membrane section containing an alternate functional component (e.g., an ion pump or an ion channel, whether voltage-gated or ligand-gated) embedded therein. Note that this approach was also conceived of as an alternative to retractable axons/dendrites and axonal/dendritic spines: by attaching additional membrane sections with a very steep angle of inclination (or a lesser inclination with a greater quantity of segments), an emergent section of artificial membrane is created that extends out from the biological membrane in the same way as axons and dendrites.

However, this approach was eventually supplemented by one that necessitates less technological infrastructure (i.e., one that is simpler and thus more economical and more readily realizable). If the size of the integral-membrane components is small enough (preferably smaller than that of their biological analogues), then differential activation of components or membrane sections would achieve the same effect as changing the organization or type of integral-membrane components, effectively eliminating the need to physically interchange membrane sections at all.
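A minimal sketch may make the differential-activation idea concrete. The following Python fragment is purely illustrative (the essay specifies no concrete hardware or interfaces, so every name here is hypothetical): each membrane site carries one instance of every component type, and a “morphological” change reduces to choosing which components are switched on.

```python
# Hypothetical sketch: if integral-membrane components are small enough,
# a fixed, dense grid of components can stand in for physically swapping
# membrane sections; "morphological" change becomes choosing which
# components are active. All component names are illustrative.

COMPONENT_TYPES = ("na_channel", "k_channel", "ion_pump", "ligand_gated")

# Each site on the membrane grid carries one instance of every component
# type; functionality is selected by an activation mask.
membrane = [{c: False for c in COMPONENT_TYPES} for _ in range(8)]

def configure(site: int, active: set) -> None:
    """Differentially activate components at one membrane site."""
    for c in COMPONENT_TYPES:
        membrane[site][c] = c in active

# "Swap" site 3 from a sodium-channel patch to a pump-dominated patch
# without physically interchanging any hardware.
configure(3, {"na_channel"})
configure(3, {"ion_pump", "k_channel"})
print(membrane[3])
```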

Active Neuronal Modulation and Modification

The technological and methodological infrastructure used to facilitate neural plasticity can also be used for active modification and modulation of neural behavior (and of the emergent functionality determined by local neuronal behavior) toward the aim of mental augmentation and modification. Potential uses already discussed include mental amplification (increasing or augmenting existing functional modalities—i.e., intelligence, emotion, morality) and mental augmentation (the creation of categorically new functional and experiential modalities). While the distinction between modification and modulation isn’t definitive, a useful way of differentiating them is to consider modification as morphological changes creating new functional modalities, and modulation as actively varying the operation of existing structures/processes, not through morphological change but rather through changes to the operation of integral-membrane components or to the properties of the local environment (e.g., increasing local ionic concentrations).

Modulation: A Less Discontinuous Alternative to Morphological Modification

The use of modulation to achieve the effective results of morphological changes seemed like a hypothetically less discontinuous alternative to morphological changes (and thus one with a hypothetically greater probability of preserving subjective-continuity). I am now more dubious about the validity of this approach, because the emergent functionality (normatively determined by morphological features) is still changed in an effectively equivalent manner.

The Eventual Replacement of Neural Ionic Solutions with Direct Electric Fields

Upon full gradual replacement of the CNS with physical-functionalist equivalents, the preferred embodiment consisted of replacing the ionic solutions with electric fields that preserve the electric potential instantiated by the difference in ionic concentrations on the respective sides of the membrane. Such electric fields can be generated directly, without recourse to electrochemicals for manifesting them. In such a case the integral-membrane components would be replaced by a means of generating and maintaining a static and/or dynamic electric field on either side of the membrane, or even merely of generating an electrical potential (i.e., voltage—a broader category encompassing electric fields) with solid-state electronics.

This procedure would allow for a fraction of the speedups (that is, an increased rate of subjective perception of time, which extends to speed of thought) resulting from emulatory (i.e., strictly computational) replication-methods, because operation would no longer be limited by the rate of passive ionic diffusion but instead by the propagation velocity of electric or electromagnetic fields.

Wireless Synapses

If we replace the physical synaptic connections the NRU uses to communicate (with both existing biological neurons and with other NRUs) with a wireless means of synaptic transmission, we can preserve the same functionality (insofar as it is determined by synaptic connectivity) while allowing any NRU to communicate with any other NRU or biological neuron in the brain at potentially equal speed. First we need a way of converting the output of an NRU or biological neuron into information that can be transmitted wirelessly. For cyber-physicalist-functionalist NRUs, regardless of their sub-class, this requires no new technological infrastructure, because they already deal with 2nd-order (i.e., not structurally or directly embodied) information; the informationalist-functionalist NRU class deals solely in this type of information, and the cyber-physical-systems sub-class of the physicalist-functionalist NRUs deals with this kind of information in the intermediary stage between sensors and actuators—consequently, converting what would have been a sequence of electromechanical actuations into information isn’t a problem. Only the passive-physicalist-functionalist NRU class requires additional technological infrastructure to accomplish this, because it doesn’t already use computational operational-modalities for its normative operation, whereas the other NRU classes do.

We position receivers within range of every neuron (or, alternatively, every NRU) in the brain, connected to actuators – the precise composition of which depends on the operational modality of the receiving biological neuron or NRU. The receiver translates incoming information into physical actuations (e.g., the release of chemical stores), thereby instantiating that informational output in physical terms. For biological neurons, the receiver’s actuators would consist of a means of electrically stimulating the neuron and releasable chemical stores of neurotransmitters (or of ionic concentrations, as an alternate means of electrical stimulation via the manipulation of local ionic concentrations). For informationalist-functionalist NRUs, the information is already in a form they can accept; such an NRU can simply integrate that information into its extant model. For cyber-physicalist NRUs, the unit’s CPU merely needs to be able to translate that information into the sequence in which it must electromechanically actuate its artificial ion-channels. For the passive-physicalist NRUs (i.e., those having no computational hardware devoted to operating individual components at all, operating instead according to physical feedback between components alone), our only option appears to be translating received information into manipulation of the local environment to vicariously affect the operation of the NRU (e.g., increasing electric potential through manipulations of local ionic concentrations, or increasing the rate of diffusion via applied electric fields to attract ions and thus achieve the same effect as a steeper electrochemical gradient or potential-difference).
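As a rough illustration of this dispatch logic, consider the following Python sketch. All class and method names are invented for illustration; the essay proposes no concrete software interface. The structural point is that the same spike event is instantiated chemically or electrically for a biological target, but integrated directly as information by an informationalist-functionalist NRU.

```python
# Hypothetical sketch: a wireless synaptic receiver dispatching on the
# operational modality of its target. Nothing here is a real system.

from dataclasses import dataclass

@dataclass
class SpikeEvent:
    source_id: int        # transmitting neuron or NRU
    magnitude: float      # normalized depolarization strength
    transmitter: str      # e.g., "glutamate", "GABA", or "" for electrical

class BiologicalTarget:
    """Receiver actuators attached to an existing biological neuron."""
    def deliver(self, event: SpikeEvent) -> None:
        # Instantiate the informational output in physical terms.
        if event.transmitter:
            self.release_chemical_store(event.transmitter, event.magnitude)
        else:
            self.stimulate_electrically(event.magnitude)
    def release_chemical_store(self, transmitter: str, amount: float) -> None:
        print(f"releasing {amount:.2f} units of {transmitter}")
    def stimulate_electrically(self, amount: float) -> None:
        print(f"applying electrical stimulus {amount:.2f}")

class InformationalTarget:
    """Informationalist-functionalist NRU: input is already informational."""
    def __init__(self):
        self.model_inputs = []
    def deliver(self, event: SpikeEvent) -> None:
        self.model_inputs.append(event)   # integrate into the extant model

# Usage: route one event to each kind of target.
event = SpikeEvent(source_id=42, magnitude=0.8, transmitter="glutamate")
for target in (BiologicalTarget(), InformationalTarget()):
    target.deliver(event)
```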

The technological and methodological infrastructure for this is very similar to that used for the “integrational NRUs”, which allows a given NRU-class to communicate with either existing biological neurons or NRUs of an alternate class.

Integrating New Neural Nets Without Functional Distortion of Existing Regions

The use of artificial neural networks (which here will designate NRU-networks that do not replicate any existing biological neurons, rather than the normative Artificial Neural Networks mentioned in the first and second parts of this essay), rather than normative neural prosthetics and BCI, was the preferred method of cognitive augmentation (creation of categorically new functional/experiential modalities) and cognitive amplification (the extension of existing functional/experiential modalities). Because they function according to the same operational modality as existing neurons (whether biological or artificial replacements), they can become a continuous part of our “selves”, whereas normative neural prosthetics and BCI are comparatively less likely to be capable of becoming an integral part of our experiential continuum (or subjective sense of self), due to their significant operational dissimilarity to biological neural networks.

A given artificial neural network can be integrated with existing biological networks in a few ways. One is interior integration, wherein the new neural network is “inter-threaded” through existing networks, with each artificial neuron placed among one or multiple existing networks; the networks are thus integrated and connected on a very local level. In “anterior” integration, the new network would instead be integrated in a way comparable to the connection between separate cortical columns, with the majority of integration happening at the periphery of each respective network or cluster.

If the interior-integration approach is used, the functionality of the region may be distorted or negated, because neurons that once took a certain amount of time to communicate now take comparatively longer, the distance between them having been increased to accommodate the extra space required by the newly integrated artificial neurons. To negate these problems, a means of increasing the speed of communication must be employed. That speed is determined by (a) the rate of diffusion across the synaptic junction and (b) the rate of diffusion across the neuronal membrane, which in most cases is synonymous with the propagation velocity in the membrane – the exception being myelinated axons, wherein a given action potential “jumps” from node of Ranvier to node of Ranvier; in these cases propagation velocity is determined by the thickness and length of the myelinated sections.

My original solution was the use of an artificial membrane morphologically modeled on a myelinated axon and possessing very low membrane capacitance (and thus high propagation velocity – note that myelination speeds conduction precisely by lowering membrane capacitance), combined with decreasing the capacitance of the existing axon or dendrite of the biological neuron. The cumulative capacitance of both is lowered in proportion to how far apart they are moved. In this way, the propagation velocities of the existing neuron and the connector-terminal are increased to allow the existing biological neurons to communicate as fast as they would have prior to the addition of the artificial neural network. This solution was eventually supplemented by the wireless means of synaptic transmission described above, which allows any neuron to communicate with any other neuron at equal speed.

Gradually Assigning Operational Control of a Physical NRU to a Virtual NRU

This approach allows us to apply the single-neuron gradual replacement facilitated by the physical-functionalist NRU to the informationalist-functionalist (physically embodied) NRU. A given section of artificial membrane and its integral membrane components are modeled. When this model is functioning in parallel (i.e., in synchronization of operative states) with its corresponding membrane section, the normative operational routines of that artificial membrane section (usually controlled by the unit’s CPU and its programming) are subsequently taken over by the computational model—i.e., the physical operation of the artificial membrane section is implemented according to and in correspondence with the operative states of the model. This is done iteratively, with the informationalist-functionalist NRU progressively controlling more and more sections of the membrane until the physical operation of the whole physical-functionalist NRU is controlled by the informational operative states of the informationalist-functionalist NRU. While this concept sprang originally from the approach of using multiple gradual-replacement phases (with a class of model assigned to each phase, wherein each is more dissimilar to the original than the preceding phase, thereby increasing the cumulative degree of graduality), I now see it as a way of facilitating sub-neuron gradual replacement in computational NRUs. Also note that this approach can be used to go from existing biological membrane-sections to a computational NRU, without a physical-functionalist intermediary stage. This, however, is comparatively more complex, because the physical-functionalist NRU already has a means of modulating its operative states, whereas the biological neuron does not. In such a case the section of lipid-bilayer membrane would presumably have to be operationally isolated from adjacent sections of membrane, using a system of chemical inventories (of either highly concentrated ionic solution or neurotransmitters, depending on the area of membrane) to produce electrochemical output, and chemical sensors to accept the electrochemical input from adjacent sections (i.e., a means of detecting depolarization and hyperpolarization). To facilitate an action potential, for example, the chemical sensors would detect depolarization, the computational NRU would model the influx of ions through the section of membrane it is replacing, and the NRU would then translate the effective result to the opposite edge of the replaced section, via either the release of neurotransmitters or the manipulation of local ionic concentrations, so as to generate the required depolarization at the adjacent section of biological membrane.
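A minimal sketch of the iterative handover might look as follows in Python. Every component is a stand-in; the synchronization test in particular is a placeholder for what would be a detailed comparison of operative states over time.

```python
# Hypothetical sketch: gradually assigning control of a physical NRU's
# membrane sections to a computational model. All names are illustrative.

import random

class MembraneSection:
    def __init__(self, idx: int):
        self.idx = idx
        self.controlled_by_model = False
    def physical_state(self) -> float:
        return random.random()   # stand-in for a measured operative state

class SectionModel:
    def predict_state(self, section: MembraneSection) -> float:
        # Stand-in: a real model would compute the section's next state
        # from its inputs and internal dynamics.
        return section.physical_state()

def synchronized(model, section, trials=100, tolerance=1.0) -> bool:
    """Placeholder synchronization test: a real test would compare time
    series of membrane potential, channel states, etc., over many trials."""
    return all(abs(model.predict_state(section) - section.physical_state())
               < tolerance for _ in range(trials))

sections = [MembraneSection(i) for i in range(10)]
model = SectionModel()

# Hand over control one section at a time, only after parallel operation
# demonstrates synchronization of operative states.
for section in sections:
    if synchronized(model, section):
        section.controlled_by_model = True

print(sum(s.controlled_by_model for s in sections), "of", len(sections),
      "sections now run from the model's operative states")
```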

Integrational NRU

This consisted of a unit facilitating connection between emulatory (i.e., informationalist-functionalist) units and existing biological neurons. The output of the emulatory units is converted into chemical and electrical output at the locations where the emulatory NRU makes synaptic connection with other biological neurons, facilitated through electrical stimulation or the release of chemical inventories (for the increase of ionic concentrations and the release of neurotransmitters, respectively). The input of existing biological neurons making synaptic connections with the emulatory NRU is likewise read by chemical and electrical sensors and converted into informational input corresponding to the operational modality of the informationalist-functionalist NRU classes.

Solutions to Scale

If we needed NEMS, or something below the scale of the present state of MEMS, for the technological infrastructure of either (a) the electromechanical systems replicating a given section of neuronal membrane, or (b) the systems used to construct and/or integrate the sections, or those used to remove or otherwise operationally isolate the existing section of lipid-bilayer membrane being replaced from adjacent sections, a postulated solution consisted of taking the difference in length between the artificial membrane section and the existing lipid-bilayer section (a difference determined by how small we can construct functionally operative artificial ion-channels) and incorporating it as added curvature in the artificial membrane section, such that its edges converge upon or superpose with the edges of the space left by the removal of the lipid-bilayer membrane-section. We would also need to increase the propagation velocity (typically determined by the rate of ionic influx, which in turn is typically determined by the concentration gradient, or difference in the ionic concentrations on the respective sides of the membrane) such that the action potential reaches the opposite end of the replacement section at the same time that it would normally have via the lipid-bilayer membrane. This could be accomplished directly by the application of electric fields with a charge opposite that of the ions (which would attract them, thus increasing the rate of diffusion), by increasing the number of open channels or the diameter of existing channels, or simply by increasing the concentration gradient through local manipulation of extracellular and/or intracellular ionic concentration—e.g., through concentrated electrolyte stores of the relevant ion that can be released to increase the local ionic concentration.
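Both compensations described above are, at bottom, simple arithmetic. The sketch below works through them with invented numbers and under one concrete reading of "added curvature": the artificial section is treated as a circular arc whose chord equals the gap left by the removed biological section. Nothing in it is from the essay itself beyond the two requirements it illustrates.

```python
# Hypothetical worked example of the two compensations, with made-up
# numbers. Units are arbitrary (e.g., micrometers and milliseconds).
import math

L_bio = 10.0          # length of the removed lipid-bilayer section
L_art = 12.0          # length of the (longer) artificial replacement
v_bio = 2.0           # propagation velocity across the biological section

# 1) Geometric compensation: bow the artificial section into a circular
#    arc whose chord equals the gap. Solve sin(t)/t = L_bio/L_art for the
#    half-angle t by bisection (sin(t)/t decreases from 1 to 0 on (0, pi]).
ratio = L_bio / L_art
lo, hi = 1e-9, math.pi
for _ in range(100):
    mid = (lo + hi) / 2
    if math.sin(mid) / mid > ratio:
        lo = mid        # arc not curved enough; increase the angle
    else:
        hi = mid
half_angle = (lo + hi) / 2
print(f"arc half-angle ~ {math.degrees(half_angle):.1f} degrees")

# 2) Timing compensation: the signal must cross the longer artificial arc
#    in the same time it took to cross the biological section.
t_bio = L_bio / v_bio
v_art_required = L_art / t_bio
print(f"required propagation velocity: {v_art_required:.2f} (vs {v_bio})")
```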

If the degree of miniaturization is so low as to make this approach untenable (e.g., increasing curvature still doesn’t allow successful integration) then a hypothesized alternative approach was to increase the overall space between adjacent neurons, integrate the NRU, and replace normative connection with chemical inventories (of either ionic compound or neurotransmitter) released at the site of existing connection, and having the NRU (or NRU sub-section—i.e., artificial membrane section) wirelessly control the release of such chemical inventories according to its operative states.

The next chapter describes (a) possible physical bases for subjective-continuity through a gradual-uploading procedure and (b) possible design requirements for in vivo brain-scanning and for systems to construct and integrate the prosthetic neurons with the existing biological brain.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Project Avatar (2011). Retrieved February 28, 2013 from http://2045.com/tech2/

Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 18, 2013
******************************
This essay is the third chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first two chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death” and “Immortality: Material or Ethereal? Nanotech Does Both!”.
***

The simplest approach to the functional replication of biological neurons I conceived of during this period involved what is normally called a “black-box” model of a neuron. This was already a concept in the wider brain-emulation community, though I had yet to find out about it. This is even simpler than the mathematically weighted Artificial Neurons discussed in the previous chapter. Rather than emulating or simulating the behavior of a neuron (i.e., using actual computational—or, more generally, signal—processing), we (1) determine the range of input values that a neuron responds to, (2) stimulate the neuron at each interval (the number of intervals depending on the precision of the stimulus) within that input-range, and (3) record the corresponding range of outputs.

This reduces the neuron to essentially a lookup table (or, more formally, an associative array). The input ranges I originally considered (in 2007) consisted of a range of electrical potentials, but these were later (in 2008) developed to include different cumulative organizations of specific voltage values (i.e., some inputs activated and others not) and, finally, the chemical inputs and outputs of neurons. The black-box approach was eventually seen as being applicable at the sub-neuron scale—e.g., to sections of the cellular membrane. This creates a greater degree of functional precision, bringing the functional modality of the black-box NRU-class into greater accordance with the functional modality of biological neurons. (That is, it is closer to biological neurons because they do in fact process multiple inputs separately, rather than a single cumulative sum at once, as in the previous versions of the black-box approach.) We would also have a higher degree of variability for a given quantity of inputs.
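A minimal sketch of steps (1)-(3) and the resulting associative array might look as follows in Python. The “biological neuron” here is a stand-in threshold function and all numbers are invented; in practice the table would be populated by stimulating and recording from a real cell.

```python
# Minimal sketch of the black-box approach: reduce a neuron to an
# associative array from stimulus to recorded response.

def biological_neuron(input_mv: float) -> float:
    # Stand-in for the real cell: a crude threshold response.
    return 1.0 if input_mv > -55.0 else 0.0

# (1) the input range the neuron responds to; (2) stimulate at each
# interval within that range; (3) record the corresponding outputs.
lo, hi, step = -80.0, -30.0, 0.5          # mV; step sets the precision
table = {}
mv = lo
while mv <= hi:
    table[round(mv, 1)] = biological_neuron(mv)
    mv += step

def black_box_nru(input_mv: float) -> float:
    """Replay the recorded response for the nearest sampled stimulus."""
    key = min(table, key=lambda k: abs(k - input_mv))
    return table[key]

print(black_box_nru(-60.2), black_box_nru(-40.7))   # -> 0.0 1.0
```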

I soon chanced upon literature dealing with MEMS (micro-electro-mechanical systems) and NEMS (nano-electro-mechanical systems), which eventually led me to nanotechnology and its use in nanosurgery in particular. I saw nanotechnology as the preferred technological infrastructure regardless of the approach used; its physical nature (i.e., operational and functional modalities) could facilitate the electrical and chemical processes of the neuron if the physicalist-functionalist (i.e., physically embodied or ‘prosthetic’) approach proved either preferable or required, while the computation required for its normative functioning (regardless of its particular application) assured that it could facilitate the informationalist-functionalist (i.e., computational emulation or simulation) replication of neurons if that approach proved preferable. This was true of MEMS as well, with the sole exception of not being able to directly synthesize neurotransmitters via mechanosynthesis, instead being limited in this regard to the release of pre-synthesized biochemical inventories. Thus I felt that I was able to work on conceptual development of the methodological and technological infrastructure underlying both (or at least on variations to the existing operational modalities of MEMS and NEMS so as to make them suitable for their intended use), without having to definitively choose one technological/methodological infrastructure over the other. Moreover, there could be processes that are reducible to computation, yet still fail to be included in a computational emulation due to our simply failing to discover the principles underlying them. The prosthetic approach had the potential of replicating this aspect by integrating such a process, as it exists in the biological environment, into its own physical operation, and performing iterative maintenance or replacement of the biological process until such time as the underlying principles of those processes could be discovered (a prerequisite for discovering how they contribute to the emergent computation occurring in the neuron) and thus included in the informationalist-functionalist approach.

Also, I had by this time come across the existing approaches to Mind-Uploading and Whole-Brain Emulation (WBE), including Randal Koene’s minduploading.org, and realized that the notion of immortality through gradually replacing biological neurons with functional equivalents wasn’t strictly my own. I hadn’t yet come across Kurzweil’s thinking in regard to gradual uploading described in The Singularity is Near (where he suggests a similarly nanotechnological approach), and so felt that there was a gap in the extant literature in regard to how the emulated neurons or neural networks were to communicate with existing biological neurons (which is an essential requirement of gradual uploading and thus of any approach meant to facilitate subjective-continuity through substrate replacement). Thus my perceived role changed from the father of this concept to filling in the gaps and inconsistencies in the already-extant approach and in further developing it past its present state. This is another aspect informing my choice to work on and further varietize both the computational and physical-prosthetic approach—because this, along with the artificial-biological neural communication problem, was what I perceived as remaining to be done after discovering WBE.

The anticipated use of MEMS and NEMS in emulating the physical processes of the neurons included first simply electrical potentials, but eventually developed to include the chemical aspects of the neuron as well, in tandem with my increasing understanding of neuroscience. I had by this time come across Drexler’s Engines of Creation, which was my first introduction to antecedent proposals for immortality—specifically his notion of iterative cellular upkeep and repair performed by nanobots. I applied his concept of mechanosynthesis to the NRUs to facilitate the artificial synthesis of neurotransmitters. I eventually realized that the use of pre-synthesized chemical stores of neurotransmitters was a simpler approach that could be implemented via MEMS, thus being more inclusive for not necessitating nanotechnology as a required technological infrastructure. I also soon realized that we could eliminate the need for neurotransmitters completely by recording how specific neurotransmitters affect the nature of membrane-depolarization at the post-synaptic membrane and subsequently encoding this into the post-synaptic NRU (i.e., length and degree of depolarization or hyperpolarization, and possibly the diameter of ion-channels or differential opening of ion-channels—that is, some and not others) and assigning a discrete voltage to each possible neurotransmitter (or emergent pattern of neurotransmitters; salient variables include type, quantity and relative location) such that transmitting that voltage makes the post-synaptic NRU’s controlling-circuit implement the membrane-polarization changes (via changing the number of open artificial-ion-channels, or how long they remain open or closed, or their diameter/porosity) corresponding to the changes in biological post-synaptic membrane depolarization normally caused by that neurotransmitter.
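To make the encoding scheme at the end of the preceding paragraph concrete, here is a hypothetical sketch of such a codebook. The voltages, polarization values, and transmitter effects are all invented placeholders; the structural point is that a discrete code voltage indexes a pre-recorded set of membrane-polarization changes that the post-synaptic NRU’s controlling circuit then implements.

```python
# Hypothetical sketch: eliminating chemical synapses by assigning each
# neurotransmitter a discrete code voltage, which the post-synaptic NRU
# decodes into pre-recorded membrane-polarization changes.
# All values below are invented placeholders.

from dataclasses import dataclass

@dataclass
class PolarizationChange:
    delta_mv: float       # degree of de- or hyperpolarization
    duration_ms: float    # how long the change persists
    channels_open: int    # how many artificial ion-channels to open

# Recorded effect of each transmitter on the post-synaptic membrane,
# keyed by the discrete voltage assigned to that transmitter.
CODEBOOK = {
    1.0: ("glutamate", PolarizationChange(+15.0, 5.0, 120)),   # excitatory
    2.0: ("GABA",      PolarizationChange(-10.0, 8.0, 80)),    # inhibitory
}

def postsynaptic_controller(received_voltage: float) -> None:
    name, change = CODEBOOK[received_voltage]
    # The controlling circuit would now actuate the artificial channels.
    print(f"{name}: open {change.channels_open} channels, "
          f"{change.delta_mv:+.1f} mV for {change.duration_ms} ms")

postsynaptic_controller(1.0)
postsynaptic_controller(2.0)
```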

In terms of the enhancement/self-modification side of things, I also realized during this period that mental augmentation (particularly the intensive integration of artificial-neural-networks with the existing brain) increases the efficacy of gradual uploading by decreasing the total portion of your brain occupied by the biological region being replaced—thus effectively making that portion’s temporary operational disconnection from the rest of the brain more negligible to concerns of subjective-continuity.

While I was thinking of the societal implications of self-modification and self-modulation in general, I wasn’t really consciously trying to do active conceptual work (e.g., working on designs for pragmatic technologies and methodologies as I was with limitless-longevity) on this side of the project due to seeing the end of death as being a much more pressing moral imperative than increasing our degree of self-determination. The 100,000 unprecedented calamities that befall humanity every day cannot wait; for these dying fires it is now or neverness.

Virtual Verification Experiments

The various alternative approaches to gradual substrate-replacement were meant to be alternative designs contingent upon various premises for what was needed to replicate functionality while retaining subjective-continuity through gradual replacement. I saw the various embodiments as being narrowed down through empirical validation prior to any whole-brain replication experiments. However, I now see that multiple alternative approaches—based, for example, on computational emulation (informationalist-functionalist) and physical replication (physicalist-functionalist), the two main approaches thus far discussed—would have concurrent appeal to different segments of the population. The physicalist-functionalist approach might appeal to the wide numbers of people who, for one metaphysical prescription or another, don’t believe enough in the computational reducibility of mind to bet their lives on it.

These experiments originally consisted of applying sensors to a given biological neuron, constructing NRUs based on a series of variations on the two main approaches, running each, and looking for any functional divergence over time. This is essentially the same approach outlined in the WBE Roadmap (which I had yet to discover at this point), which suggests a validation approach involving experiments done on single neurons before moving on to the organismal emulation of increasingly complex species, up to and including the human. My thinking in regard to these experiments evolved over the next few years to also include some novel approaches that I don’t think have yet been discussed in communities interested in brain emulation.
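The logic of such a single-neuron verification experiment can be sketched in a few lines. Everything below is a stand-in (the “biological” response is a noisy toy function and the tolerance is arbitrary); the structure, driving both systems with identical inputs and flagging divergence beyond a tolerance, is the point.

```python
# Hypothetical sketch of a single-neuron verification experiment: drive a
# biological neuron and a candidate NRU with the same input sequence and
# flag functional divergence beyond a tolerance. All components are
# stand-ins; real experiments would use recording hardware and a real NRU.

import random

def biological_response(stimulus: float) -> float:
    return stimulus * 2.0 + random.gauss(0.0, 0.01)   # noisy ground truth

def candidate_nru_response(stimulus: float) -> float:
    return stimulus * 2.0                              # the design under test

def diverged(n_trials: int = 1000, tolerance: float = 0.05) -> bool:
    """Return True if any paired response differs by more than tolerance."""
    for _ in range(n_trials):
        s = random.uniform(-1.0, 1.0)
        if abs(biological_response(s) - candidate_nru_response(s)) > tolerance:
            return True
    return False

print("functional divergence detected" if diverged() else
      "no divergence within tolerance")
```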

An equivalent physical or computational simulation of the biological neuron’s environment is required to verify functional equivalence, as otherwise we wouldn’t be able to distinguish between functional divergence due to an insufficient replication-approach/NRU-design and functional divergence due to a difference in either input or operation between the model and the original (caused by insufficiently synchronizing the environmental parameters of the NRU and its corresponding original). Isolating these neurons from their organismal environment allows the necessary fidelity (and thus computational intensity) of the simulation to be minimized by reducing the number of environmental variables affecting the biological neuron during the span of the initial experiments. Moreover, even if this doesn’t give us a perfectly reliable model of the efficacy of functional replication, given the number of environmental variables one expects a neuron belonging to a full brain to have, it is a fair approximator. Some NRU designs might fail even in a relatively simple neuronal environment, and thus testing all NRU designs using a number of environmental variables similar to that of the biological brain might be unnecessary (and thus economically prohibitive) given the cost-benefit ratio. And since we need to isolate the neuron to perform any early non-whole-organism experiments (i.e., on individual neurons) at all, having precise control over the number and nature of environmental variables would be relatively easy, as this is already an important part of the methodology used for normative biological experimentation anyway—lack of control over environmental variables makes for an inconsistent methodology and thus for unreliable data.

And as we increase to the whole-network and eventually organismal level, a similar reduction of the computational requirements of the NRU’s environmental simulation is possible by replacing the inputs or sensory mechanisms (from single photocell to whole organs) with VR-modulated input. The required complexity and thus computational intensity of a sensorially mediated environment can be vastly minimized if the normative sensory environment of the organism is supplanted with a much-simplified VR simulation.

Note that the efficacy of this approach in comparison with the first (reducing actual environmental variables) is hypothetically greater, because going from a simplified VR version to the original sensorial environment is a difference not of category but of degree. Thus a potentially fruitful variation on the first experiment (physical reduction of a biological neuron’s environmental variables) would be not the complete elimination of environmental variables, but rather a decrease in the range or degree of deviation of each variable—retaining all the categories and merely reducing their degree.

Anecdotally, one novel modification conceived during this period involves distributing sensors (operatively connected to the sensory areas of the CNS) in the brain itself, so that we can viscerally sense ourselves thinking—the notion of metasensation: a sensorial infinite regress caused by having sensors in the sensory modules of the CNS, essentially allowing one to sense oneself sensing oneself sensing.

Another is a seeming refigurement of David Pearce’s Hedonistic Imperative—namely, the use of active NRU modulation to negate the effects of cell (or, more generally, stimulus-response) desensitization—the fact that the more times we experience something, or indeed even think something, the more it decreases in intensity. I felt that this was what made some of us lose interest in our lovers and become bored by things we once enjoyed. If we were able to stop cell desensitization, we wouldn’t have to needlessly lose experiential amplitude for the things we love.

In the next chapter I will describe the work I did in the first months of 2008, during which I worked almost wholly on conceptual varieties of the physically embodied prosthetic (i.e., physical-functionalist) approach (particularly in gradually replacing subsections of individual neurons to increase how gradual the cumulative procedure is) for several reasons:

The original utility of ‘hedging our bets’ as discussed earlier—developing multiple approaches increases evolutionary diversity; thus, if one approach fails, we have other approaches to try.

I felt the computational side was already largely developed in the work done by others in Whole-Brain Emulation, and thus that I would be benefiting the larger objective of indefinite longevity more by focusing on those areas that were then comparatively less developed.

The perceived benefit of a new approach to subjective-continuity through a substrate-replacement procedure aiming to increase the likelihood of gradual uploading’s success by increasing the procedure’s cumulative degree of graduality. The approach was called Iterative Gradual Replacement and consisted of undergoing several gradual-replacement procedures, wherein the class of NRU used becomes progressively less similar to the operational modality of the original, biological neurons with each iteration; the greater the number of iterations used, the less discontinuous each replacement-phase is in relation to its preceding and succeeding phases. The most basic embodiment of this approach would involve gradual replacement with physical-functionalist (prosthetic) NRUs, which in turn are then gradually replaced with informationalist-functionalist (computational/emulatory) NRUs. My qualms with this approach today stem from the observation that the operational modalities of the physically embodied NRUs seem as discontinuous in relation to the operational modalities of the computational NRUs as the operational modalities of the biological neurons do. The problem seems to result from the lack of an intermediary stage between physical embodiment and computational (or second-order) embodiment.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.


Life Extension and Risk Aversion – Video by G. Stolyarov II

Life Extension and Risk Aversion – Video by G. Stolyarov II

Mr. Stolyarov explains that living longer renders people more hesitant to risk their lives, for the simple reason that they have many more years to lose than their less technologically endowed ancestors.

References
– “Life Extension and Risk Aversion” – Essay by G. Stolyarov II
– “Life expectancy variation over time” – Wikipedia
– Life Expectancy Graphs – University of Oregon
– History of Life Expectancy – WorldLifeExpectancy.com
– “Steven Pinker” – Wikipedia
– “The Better Angels of Our Nature” – Wikipedia
– “FBI Statistics Show Major Reduction in Violent Crime Rates” – WanttoKnow.info
– “List of motor vehicle deaths in U.S. by year” – Wikipedia
– “Prevalence of tobacco consumption” – Wikipedia
– “Human error accounts for 90% of road accidents” – Olivia Olarte – AlertDriving.com
– “Autonomous car” – Wikipedia
– “Iterative Learning versus the Student-Debt Trap” – Essay and Video by G. Stolyarov II

Life Extension and Risk Aversion – Article by G. Stolyarov II

Life Extension and Risk Aversion – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
April 28, 2013
******************************

A major benefit of longer lifespans is the cultivation of a wide array of virtues. Prudence and forethought are among the salutary attributes that the lengthening of human life expectancies – hopefully to the point of eliminating any fixed upper bound – would bring about.

Living longer renders people more hesitant to risk their lives, for the simple reason that they have many more years to lose than their less technologically endowed ancestors.

This is not science fiction or mere speculation; we see it already. In the Western world, average life expectancies increased from the twenties and thirties in the Middle Ages to the early thirties circa 1800 to the late forties circa 1900 to the late seventies and early eighties in our time. As Steven Pinker writes in his magnum opus, The Better Angels of Our Nature, the overall trend in the Western world (in spite of temporary spikes of conflict, such as the World Wars) has been toward greater peace and increased reluctance of individuals to throw their lives away in armed struggles for geopolitical gain. Long-term declines in crime rates, automobile fatalities, and even smoking have accompanied (and contributed to) rises in life expectancy. Economic growth and improvements in the technologies of production help as well. If a person has not only life but material comfort to lose, this amplifies the reluctance to undertake physical risks even further.

Yet, with today’s finite lifespans, most individuals still find a non-negligible degree of life-threatening risk in their day-to-day endeavors to be an unavoidable necessity. Most people in the United States need to drive automobiles to get to work – in spite of the risk of sharing the road with incompetent, intoxicated, or intimidating other drivers. Over 30,000 people perish every year in the United States alone as a result of that decision. While the probability for any given individual of dying in an automobile accident is around 11 in 100,000 (0.011%) per year, this is still unacceptably high. How would a person with several centuries, several millennia, or all time ahead of him feel about this probability? Over a very long time, the probability of not encountering such a relatively rare event asymptotically approaches zero. For instance, at today’s rate of US automobile fatalities, a person living 10,000 years would have a probability of (1 – 0.00011)^10000 ≈ 0.3329 – a mere 33.29% likelihood – of not dying in an automobile accident! If you knew that a problem in this world had a two-thirds probability of killing you eventually, would you not want to do something about it?
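For readers who want to check the arithmetic, here is a minimal sketch in Python. The 0.011% annual rate is the essay’s figure; the shorter horizons are added purely for comparison:

```python
# Cumulative survival probability under a constant annual fatality risk.
# The 0.011% annual rate comes from the essay; the 50- and 500-year
# horizons are illustrative additions.
annual_fatality_rate = 0.00011  # roughly 11 in 100,000 per year

for years in (50, 500, 10_000):
    survival = (1 - annual_fatality_rate) ** years
    print(f"{years:>6} years: {survival:.2%} chance of never dying in a car accident")
```

At 50 years the survival chance is about 99.45%, at 500 years about 94.65%, and at 10,000 years it falls to 33.29%, matching the essay’s figure.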

Of course, the probabilities of tragic events are not fixed or immutable. They can be greatly affected by individual choices – our first line of defense against life-threatening risks. Well-known risk-management strategies for reducing the likelihood of any damaging event include (1) avoidance (not pursuing the activity that could cause the loss – e.g., not driving on a rugged mountain road – but this is not an option in many cases), (2) loss prevention (undertaking measures, such as driving defensively, that allow one to engage in the activity while lowering the likelihood of catastrophic failure), and (3) loss reduction (undertaking measures, such as wearing seat belts or driving in safer vehicles, that would lower the amount of harm in the event of a damaging incident). Individual choices, of course, cannot prevent all harms. The more fundamental defense against life-threatening accidents is technology. Driving itself could be made safer by replacing human operators, whose poor decisions cause over 90% of all accidents, with autonomous vehicles – early versions of which are currently being tested by multiple companies worldwide and have not caused a single accident to date when not manually driven.
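To make the three strategies concrete, here is a small illustrative calculation of expected annual loss. Every reduction factor below is a hypothetical stand-in, not a figure from the essay:

```python
# Hypothetical comparison of the three risk-management strategies.
# All reduction factors are invented for illustration.
baseline_probability = 0.00011  # annual chance of a fatal accident (essay's figure)
baseline_severity = 1.0         # normalized severity of the loss

strategies = {
    "avoidance":       (0.0,                        baseline_severity),        # skip the activity
    "loss prevention": (baseline_probability * 0.5, baseline_severity),        # e.g., defensive driving
    "loss reduction":  (baseline_probability,       baseline_severity * 0.5),  # e.g., seat belts
}

for name, (probability, severity) in strategies.items():
    print(f"{name:>15}: expected annual loss = {probability * severity:.6f}")
```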

Today, forward-thinking technology companies such as Google are driving the autonomous-vehicle revolution ahead. There is, unfortunately, no large clamor by the public for these life-saving cars yet. However, as life expectancies lengthen, that clamor will surely be heard. When we live for centuries and then for millennia, we will view as barbarous the age when people were expected to take frightening risks with their irreplaceable existences, just to make it to the office every morning. We will see the attempt to manually operate a vehicle as a foolish and reckless gamble with one’s life – unless one is a professional stunt driver who would earn millions in whatever future currency will then exist.

But living longer will accomplish more than just a changed perspective toward the risks presently within our awareness. Because of our expanded scope of personal interest, we will begin to be increasingly aware of catastrophes that occur at much longer intervals than human lifespans have occupied to date. The impacts of major earthquakes and volcano eruptions, recurring ice ages, meteor strikes, and continental drift will begin to become everyday concerns, with far more individuals devoting their time, money, and attention to developing technological solutions to these hitherto larger-than-human-scale catastrophes. With even more radically lengthened lifespans, humans will be motivated to direct their efforts, including the full thrust of scientific research, toward overcoming the demise of entire solar systems. In the meantime, there would be less tolerance for any pollution that could undermine life expectancies or the long-term sustainability of a technological infrastructure (which, of course, would be necessary for life-extension treatments to continue keeping senescence at bay). Thus, a society of radical life extension will embrace market-generated environmentally friendly technologies, including cleaner energy sources, reuse of raw materials (for instance, as base matter for 3D printing and nanoscale fabrication), and efficient targeting of resources toward their intended purposes (e.g., avoidance of wasted water in sprinkler systems or wasted paper in the office).

When life is long and good, humans move up on the hierarchy of needs. Not starving today ceases to be a worry, as does not getting murdered tomorrow. The true creativity of human faculties can then be directed toward addressing the grand, far more interesting and technologically demanding, challenges of our existence on this Earth.

Some might worry that increased aversion to physical risk would dampen human creativity and discourage people from undertaking the kinds of ambitious and audacious projects that are needed for technological breakthroughs to emerge and spread. However, aversion to physical risk does not entail aversion to other kinds of risk – social, economic, or political. Indeed, social rejection or financial ruin are not nearly as damaging to a person with millennia ahead of him as they are to a person with just a few decades of life left. A person who tries to run an innovative business and fails can spend a few decades earning back the capital needed to start again. Today, few entrepreneurs have that second chance. Most do not even have a first chance, as the initial capital needed for a groundbreaking enterprise is often colossal. Promising ideas and a meritorious character do not guarantee one a wealthy birth, and thus even the best innovators must often start with borrowed funds – a situation that gives them little room to explore the possibilities and amplifies their ruin if they fail.  The long-lived entrepreneurs in a world of indefinite life extension would tend to earn their own money upfront and gradually go into business for themselves as they obtain the personal resources to do so. This kind of steady, sustainable entry into a line of work allows for a multitude of iterations and experiments that maximize the probability of a breakthrough.

Alongside the direct benefits of living longer and the indirect benefits of the virtues cultivated thereby, indefinite life extension will also produce less stressful lives for most. The less probability there is of dying or becoming seriously injured or ill, the easier one can breathe as one pursues day-to-day endeavors of self-improvement, enjoyment, and productive work. The less likely a failure is to rob one of opportunities forever, the more likely humans will be to pursue the method of iterative learning and to discover new insights and improved techniques through a beneficent trial-and-error process, whose worst downsides will have been curtailed through technology and ethics. Life extension will lead us to avoid and eliminate the risks that should not exist, while enabling us to safely pursue the risks that could benefit us if approached properly.

Liberty Through Long Life – Video by G. Stolyarov II

Liberty Through Long Life – Video by G. Stolyarov II

To maximize their hopes of personally experiencing an amount of personal freedom even approaching that of the libertarian ideal, all libertarians should support radical life extension.

References
– “Liberty Through Long Life” – Essay by G. Stolyarov II
– Resources on Indefinite Life Extension (RILE)
– “Libertarian Life-Extension Reforms” – Video Series – G. Stolyarov II
– “Massive open online course” – Wikipedia
– Mozilla’s Open Badges
– “Open Badges and Proficiency-Based Education: A Path to a New Age of Enlightenment” – Essay by G. Stolyarov II
– “Deep Space Industries” – Wikipedia
– “Planetary Resources” – Wikipedia
– The Seasteading Institute
– “Seasteading’s Potential and Challenges: An Overview” – Essay by G. Stolyarov II
– “Seasteading’s Potential and Challenges: An Overview” – Video by G. Stolyarov II
– “Bitcoin” – Wikipedia
– “Benjamin Franklin and the Early Scientific Vision – 1780” – Foundation for Infinite Survival
– “Revisiting the proto-transhumanists: Diderot and Condorcet” – George Dvorsky – Sentient Developments
– “Marquis de Condorcet, Enlightenment proto-transhumanist” – George Dvorsky – IEET
– SENS Research Foundation
– “Ray Kurzweil” – Wikipedia

Liberty Through Long Life – Article by G. Stolyarov II

Liberty Through Long Life – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
April 14, 2013
******************************

            It is commonly recognized among libertarians (and some others) that the freedom of individuals to innovate will result in a more rapid rate of technological progress. In “Six Libertarian Reforms to Accelerate Life Extension” I described six liberty-enhancing political changes that would more swiftly bring about the arrival of indefinite human longevity. But, as is less often understood, the converse of this truth also holds. Technological progress in general improves the prospects for liberty and its actual exercise in everyday life. One of the most promising keys to achieving liberty in our lifetimes is to live longer so that we can personally witness and benefit from accelerating technological progress.

            Consider, for example, what the Internet has achieved with respect to expanding the practical exercise of individual freedom of speech. It has become virtually impossible for regimes, including their nominally private “gatekeepers” of information in the mass media and established publishing houses, to control the dissemination of information and the expression of individual opinion. In prior eras, even in countries where freedom of speech was the law of the land, affiliations of the media, by which speech was disseminated, with the ruling elite would serve as a practical barrier for the discussion of views that were deemed particularly threatening to the status quo. In the United States, effective dissent from the established two-party political system was difficult to maintain in the era of the “big three” television channels and a print and broadcast media industry tightly controlled by a few politically connected conglomerates. Now expressing an unpopular opinion is easier and less expensive than ever – as is voting with one’s money for an ever-expanding array of products and services online. The ability of individuals to videotape public events and the behavior of law-enforcement officers has similarly served as a check on abusive behavior by those in power. Emerging online education and credentialing options, such as massive open online courses and Mozilla’s Open Badges, have the power to motivate a widespread self-driven enlightenment which would bring about an increased appreciation for rational thinking and individual autonomy.

            Many other technological advances are on the horizon. The private space race is in full swing, with companies such as SpaceX, Virgin Galactic, Deep Space Industries, and Planetary Resources embarking on ever more ambitious projects. Eventually, these pioneering efforts may enable humans to colonize new planets and build permanent habitats in space, expanding jurisdictional competition and opening new frontiers where free societies could be established. Seasteading, an idea only five years in development, is a concept for building modular ocean platforms where political experimentation could occur and, through competitive pressure, catalyze liberty-friendly innovations on land. (I outlined the potential and the challenges of this approach in an earlier essay.) The coming decades could see the emergence of actual seasteads of increasing sophistication, safety, and political autonomy. Another great potential for increasing liberty comes from the emerging digital-currency movement, of which Bitcoin has been the most prominent exemplar to date. While Bitcoin has been plagued with recent extreme exchange-rate volatility and vulnerability to manipulation and theft by criminal hackers, it can still provide some refuge from the damaging effects of inflationary and redistributive central-bank monetary policy. With enough time and enough development of the appropriate technological infrastructure, either Bitcoin or one of its successor currencies might be able to obtain sufficient stability and reliability to become a widespread apolitical medium of exchange.

            But there is a common requirement for one to enjoy all of these potential breakthroughs, along with many others that may be wholly impossible to anticipate: one has to remain alive for a long time. The longer one remains alive, the greater the probability that one’s personal sphere of liberty would be expanded by these innovations. Living longer can also buy one time for libertarian arguments to gain clout in the political sphere and in broader public opinion. Technological progress and pro-liberty activism can reinforce one another in a virtuous cycle.

            To maximize their hopes of personally experiencing an amount of personal freedom even approaching that of the libertarian ideal, all libertarians should support radical life extension. This sought-after goal of some ancient philosophers, medieval alchemists, Enlightenment thinkers (notably Franklin, Diderot, and Condorcet), and medical researchers from the past two centuries, is finally within reach of many alive today. Biogerontologist Aubrey de Grey of the SENS Research Foundation gives humankind a 50 percent likelihood of reaching “longevity escape velocity” – a condition where increases in life expectancy outpace the rate of human senescence – within 25 years. Inventor, futurist, and artificial-intelligence researcher Ray Kurzweil predicts a radical increase in life expectancy in the 2020s, made possible by advances in biotechnology and nanotechnology, aided by exponentially growing computing power. But, like de Grey and perhaps somewhat unlike Kurzweil, I hold the view that these advances are not inevitable; they rely on deliberate, sustained, and well-funded efforts to achieve them. They rely on support by the general public to facilitate donations, positive publicity, and a lack of political obstacles placed in their way. All libertarians should become familiar with both the technical feasibility and the philosophical desirability of a dramatic, hopefully indefinite, extension of human life expectancies. My compilation of Resources on Indefinite Life Extension (RILE) is a good starting point for studying this subject by engaging with a wide variety of sources, perspectives, and ongoing developments in science, technology, and activism.

            We have only this one life to live. If we fail to accomplish our most cherished goals and our irreplaceable individual universes disappear into oblivion, then, to us, it will be as if those goals were never accomplished. If we want liberty, we should strive to attain it in our lifetimes. We should therefore want those lifetimes to be lengthened beyond any set limit, not just for the sake of experiencing a far more complete liberty, but also for the sake of life itself and all of the opportunities it opens before us.

Bitcoin for Beginners – Article by Jeffrey A. Tucker

Bitcoin for Beginners – Article by Jeffrey A. Tucker

The New Renaissance Hat
Jeffrey A. Tucker
April 2, 2013
******************************

Understanding Bitcoin requires that we understand the limits of our ability to imagine the future that the market can create for us.

Thirty years ago, for example, if someone had said that electronic text—digits flying through the air and landing in personalized inboxes owned by us all that we check at will at any time of the day or night—would eventually displace first class mail, you might have said it was impossible.

After all, not even the Jetsons cartoon imagined email. Elroy brought notes home from his teacher on pieces of paper. Still, email has largely displaced first-class mail, just as texting, social networking, private messaging, and even digital vmail via voice-over-Internet are replacing the traditional telephone.

It turns out that the future is really hard to imagine, especially when entrepreneurs specialize in surprising us with innovations. The markets are always outsmarting even the most wild-eyed dreamers, and they are certainly smarter than the intellectual who keeps saying: such and such cannot happen.

It’s the same today. What if I suggested that digital money could eventually come to replace government paper money? Heaven knows we need a replacement.

Solving Problems a Byte at a Time

Money started in modern times as gold and silver, and it was controlled by its owners and users. Then the politicians got hold of it—a controlling interest in half of every transaction—and look what they did. Today money is rooted in nothing at all and its value is subject to the whims of central planners, politicians, and monetary bureaucrats. This system is not very modern when we consider a world in which the market is driving innovations in other aspects of our daily lives.

Maybe it was just a matter of time. The practicality is impossible to deny: Gamers needed tokens they could trade. Digital real estate needed to be bought and sold. Money was also becoming more and more notional, with wire transfers, bank computer systems, and card networks serving to move “money” around. The whole world was gradually migrating to the digital sphere, but conventional money was attached to the ground, to vaults owned or controlled by governments.

The geeks went to work on it in the 1990s and developed a number of prototypes—Ecash, bit gold, RPOW, b-money—but they all faltered for the same reason: their supply could not be limited and no one could figure out how to make them impossible to double and triple spend. Normally, reproducibility is a wonderful thing. You can send me an image and still keep it. You can send me a song and not lose control of yours. The Internet made possible infinite copying, which is a great thing for media and texts and—with 3-D printing—even objects. But reproducibility is not a feature that benefits a medium of exchange.

After all, a currency is useless unless it is scarce and its replication is carefully controlled. Think of the gold standard. There is a fixed amount of gold in the world, and it enters into economic life only through hard work and real expenditure. Gold has to be mined. All gold is interchangeable with all other gold, but when I own an ounce, you can’t own it at the same time. How can such a system be replicated in the digital sphere? How can you assign titles to a fungible digital good and make sure that these titles are absolutely sticky to the property in question?

Follow the Money

Finally it happened. In 2008, a person called “Satoshi Nakamoto” created Bitcoin. He wasn’t the first to solve the problem of double spending. A currency called e-gold did that, but the flaw was that there was a central entity in charge that users had to trust. Bitcoin removed this central point of failure, enabling miners themselves constantly to validate the transaction record. He had each user download the full ledger of all existing Bitcoins so that each could be checked for its title and not used more than once at the same time. With his system, every coin had an owner, and the system could not be gamed.
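As a toy sketch of that idea (a deliberate simplification: real Bitcoin tracks unspent transaction outputs rather than named coins), a ledger in which every coin has exactly one owner, and in which a second spend of the same coin is rejected, might look like this:

```python
# Toy ledger sketch: each coin has exactly one owner, and a spend is valid
# only if the spender currently holds the coin. This is a simplification;
# real Bitcoin tracks unspent transaction outputs, not named coins.
class ToyLedger:
    def __init__(self):
        self.owner_of = {}  # coin_id -> current owner

    def mint(self, coin_id, owner):
        assert coin_id not in self.owner_of, "coin already exists"
        self.owner_of[coin_id] = owner

    def transfer(self, coin_id, sender, recipient):
        if self.owner_of.get(coin_id) != sender:
            raise ValueError("invalid spend: sender does not own this coin")
        self.owner_of[coin_id] = recipient

ledger = ToyLedger()
ledger.mint("coin-1", "alice")
ledger.transfer("coin-1", "alice", "bob")      # fine
# ledger.transfer("coin-1", "alice", "carol")  # would raise: double spend
```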

Further, Nakamoto built in a system of mining that attempts to replicate the experience of the gold standard. The math equations you have to solve get harder over time. The early creators had it easy, just like the early miners of gold could pan it out of the river, though later they had to dig into the mountain. Nakamoto put a limit on the number of coins that can be mined (21 million by 2140). (A new coin is currently mined every 20 seconds or so, and a transaction occurs every second.)
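A minimal sketch of the hash-puzzle idea behind mining follows. The difficulty knob here (a count of leading zero hex digits) and the block data are toy choices, though the reward schedule used to check the 21-million cap is Bitcoin’s real one (50 coins per block, halving every 210,000 blocks):

```python
import hashlib

# Toy proof-of-work: find a nonce whose SHA-256 hash starts with
# `difficulty` zero hex digits. Raising `difficulty` makes each coin
# harder to "mine", echoing the gold analogy in the text above.
def mine(block_data: bytes, difficulty: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

print(mine(b"toy block", 4))  # noticeably slower at difficulty 5, 6, ...

# The 21-million cap follows from the reward schedule: 50 coins per block,
# halving every 210,000 blocks, which sums as a geometric series.
total = sum(210_000 * (50 / 2 ** i) for i in range(33))
print(total)  # approximately 21,000,000
```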

He made his code completely open-source and available to all so that it could be trusted. And the payment system used the most advanced form of public-key cryptography, with public keys visible to all and a signature scheme that makes the private key computationally infeasible to recover.
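The signing-and-verification idea can be sketched with the third-party `cryptography` package (an assumption of this example; any ECDSA library would do). Bitcoin uses ECDSA over the secp256k1 curve: the private key signs a transaction, and anyone holding the public key can verify the signature without learning the private key.

```python
# Sketch of sign-and-verify with the `cryptography` package
# (pip install cryptography). The transaction string is, of course,
# a placeholder, not a real Bitcoin transaction format.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

transaction = b"alice pays bob 1 coin"
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```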

No one would be in charge of the system; everyone would be in charge of the system. This is what it means to be open source, and it’s the same dynamic that has made WordPress a powerhouse in the software community. There would be no need for an Audit Bitcoin movement. Trust, anonymity, speed, strict property rights, and the possibility that applications could be built on top of the infrastructure made it perfect.

Bitcoin’s design paper appeared on November 1, 2008; the network itself went live in January 2009. To really appreciate why this matters, consider the times. The entire political and financial establishment was in full-scale panic meltdown. The real estate markets had collapsed, pulling down the balance sheets of the major banks. The investment banks were unloading mortgage-backed securities at an unprecedented pace. Boats delivering goods couldn’t leave shore because they could find no backers for their insurance bonds. For a moment, it seemed like the world was ending. The Republicans held the White House, but the unthinkable still happened: Government and the central banks decided to attempt a full-scale rescue of the whole system, spending and creating trillions in new paper tickets to fill bank vaults.

Clearly government paper was failing. A digital alternative had to exist. But what gave Bitcoin its value? There were several factors. It was not fixed to any existing currency, so it could float according to human valuation. It was made from real stuff: the very 1s and 0s that were driving forward the global market economy. And while 1s and 0s can be reproduced unto infinity, the new coins could not, thanks to a system in which the coin and its public key were strictly controlled and the ledger updated for every transaction. Its soundness could be checked constantly through instantaneous conversion to other currencies as well as to goods and services. The model seemed impenetrable, the first digital currency that really addressed all the problems that had doomed previous attempts.

A Bitcoin of One’s Own

Let’s fast forward in time to March 2013. I had become the proud owner of my first Bitcoin. My wallet lived on my smartphone. In only a few years, some wonderful applications had developed around the currency unit. Although I’m a bit techy, I’m not a rocket scientist, and in Bitcoin’s early days I’m quite certain that I would have been out of my league. But this is how digital institutions develop to become ever more user friendly. At the same event at which I became a Bitcoin owner, I also used a Bitcoin ATM. I put in the green stuff, held my digital wallet up to the scanner, and then I felt the buzz on my smartphone. Physical became digital. Beautiful.

But still I wondered what exactly I could do with these things. That’s when the consumer world of Bitcoin products appeared before me. We aren’t just talking about the Silk Road—a website that became notorious for enabling the easy, anonymous buying and selling of drugs. There are Bitcoin stores everywhere. And there are services in which you can buy from any website with a Bitcoin interface. There was growing talk of Bitcoin futures markets. Some companies were rumored to be going public with Bitcoins, and thereby bypassing the whole of the Securities and Exchange Commission. The implications are mind-blowing.

Sacred Pliers

Still, I’m a tactile kind of guy. I need to experience things. So I went to one of these sites. I bought the first product I saw (why, I do not know). It was a pair of pliers for crimping electric cables. I put in my shipping address and up came a note that said it was time to pay. This was the moment I had been waiting for. A QR code – that funny square design that works like a two-dimensional bar code – popped up onscreen. I held up my “wallet” and scanned. In less than two seconds, the deed was done. It was easier than Amazon’s one-click ordering system. My heart raced. I jumped out of my chair and did a quick song and dance around the room. Somehow I had seen it thoroughly for the first time: this is the future.

The pliers arrived two days later, and even though I have no use for them, I still treasure them.

Bitcoin had already taken off when the surprising Cyprus crisis hit in a big way. The government was talking about seizing bank deposits as a way of bailing out the whole system. During this period, Bitcoin essentially doubled in value. Press reports said that people were pulling out government currency and converting it, not only in Cyprus but also in Spain and Italy and elsewhere. The price of Bitcoin in terms of dollars soared. Another way to put this is that the price of goods and services in terms of Bitcoin was going down. Yes, this is the much-dreaded system that mainstream economists decry as “deflation.” The famed Keynesian Paul Krugman has even gone so far as to say that the worst thing about Bitcoin is that people hoard them instead of spending them, thereby replicating the feature of the gold standard that he hates the most! He might as well have given a ringing endorsement, as far as I’m concerned.

Obsession and Resentment

My own experience with Bitcoin during this time intensified. I began to call friends on Skype and scan their QR codes and trade currencies. I began to rope other people into the obsession based on my experience: you have to own to believe. After one full day of buying, selling, and using Bitcoins, I had the strange experience of resenting that I had to pay a cab fare in plain old U.S. dollars.

How do you obtain Bitcoins? This process can be a bit tricky. You can look up localbitcoins.com and find a local person to meet you to trade cash for Bitcoins. Usually, this exchange takes place at high premiums of anywhere from 10 percent to 50 percent depending on how competitive the local market is. It is understandable why people are reluctant to do this, no matter how safe it is. There is just something that seems sketchy about meeting a stranger in an all-night cafe to do some unusual digital currency exchange.

A more conventional route is to go to one of many online sellers and link up your bank account and buy. This process can take a few days. And then when you set out to transfer the funds, you might be surprised at the limits in the market that exist these days. Sites are rationing Bitcoin selling based on availability, just given the high demand. It could be 10 days or more to go from non-owner to real owner. But once you have them, you are off to the races. Sending and receiving money has never been easier.

Doubts?

As of this writing, a Bitcoin is trading for $88.249.  Just three years ago, it hovered at 0.14 cents. Many people look at the current market and think, surely this is a speculative bubble. That could be true, but it might not be. People are exchanging an unstable, fiat paper for something with a real title that cannot be duplicated. Everyone knows precisely how many Bitcoins exist at any time. Anyone can observe the transactions taking place in real time. A Bitcoin’s price can go up and down, and that’s fine, but there is no real speculation going on here that is endogenous to the Bitcoin market itself.

Is it a pyramid scheme? The defining mark of a pyramid scheme is that more than one person has an equal claim on the same money or good. This is physically impossible with Bitcoin. The way the program is set up, it is a strict property rights regime with no exceptions. In fact, in early March, there was a brief hiccup in the system when some new coins were approved by one group of developers but not approved by another. A “fork” appeared in the system. The price began to fall. Developers worked fast to resolve the dispute and eventually the system—and the price—returned to normal. This is the advantage of the open-source system.

But what about the vague sense some people have that a handful of coders cannot, on their own, cause a new currency to come into existence? Well, if you look back at what Austrian monetary theorist Carl Menger says, he points out that a similar process is precisely how gold became money. No new currency is at first used by everyone. It is at first used only “by the most discerning and most capable economizing individuals.” Their successful behaviors are then emulated by others. In other words, the emergence of money involves entrepreneurship – that is, being alert to opportunities to discover and provide something new.

Leviathan Leers

But what about a government crackdown? No doubt that attempt will be made. Already, some national government agencies are expressing some degree of annoyance at what could be. But governments haven’t been able to control the cash economy. It would be infinitely more difficult to control a virtual currency with no central bank, with encryption, and with millions of users per day. Controlling that would be unthinkable.

There was a time when the idea that ebooks would replace physical books was an absurd notion. When I first took a look at the early generation of ereaders, I laughed and scoffed. It will never happen. Now I find myself looking for a home for my physical books and loading up on ebooks by the hundreds. Such is the way markets surprise us. Technology without central planners makes dreams come true.

It’s possible that Bitcoin will flop. Maybe it is just the first generation. Maybe thousands of people will lose their shirts in this first go-round. But is the digitization of money coming? Absolutely. Will there always be skeptics out there? Absolutely. But in this case, they are not in charge. Markets will do what they do, building the future whether we approve or understand it fully or not. The future will not be stopped.

Jeffrey Tucker is executive editor and publisher at Laissez Faire Books.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution United States License, which requires that credit be given to the author.