
To Bee or Not to Bee? – Article by Paul Driessen

The New Renaissance Hat
Paul Driessen
September 9, 2013
******************************

Activist groups continue to promote scary stories that honeybees are rapidly disappearing, dying off at “mysteriously high rates,” potentially affecting one-third of our food crops and causing global food shortages. Time Magazine says readers need to contemplate “a world without bees,” while other “mainstream media” articles have sported similar headlines.

The Pesticide Action Network and NRDC are leading campaigns that claim insecticides, especially neonicotinoids, are at least “one of the key factors,” if not the principal or sole reason for bee die-offs.

Thankfully, the facts tell a different story – two stories, actually. First, most bee populations and most managed hives are doing fine, despite periodic mass mortalities that date back over a thousand years. Second, where significant depopulations have occurred, many suspects have been identified, but none has yet been proven guilty, although researchers are closing in on several of them.

Major bee die-offs have been reported as far back as 950, 992 and 1443 AD in Ireland. 1869 brought the first recorded case of what we now call “colony collapse disorder,” in which hives full of honey are suddenly abandoned by their bees. More cases of CCD or “disappearing disease” have been reported in recent decades, and a study by bee researchers Robyn Underwood and Dennis vanEngelsdorp chronicles more than 25 significant bee die-offs between 1868 and 2003. However, contrary to activist campaigns and various news stories, both wild and managed bee populations are stable or growing worldwide.

Beekeeper-managed honeybees, of course, merit the most attention, since they pollinate many important food crops, including almonds, fruits and vegetables. (Wheat, rice and corn, on the other hand, do not depend at all on animal pollination.) The number of managed honeybee hives has increased some 45% globally since 1961, Marcelo Aizen and Lawrence Harder reported in Current Biology – even though pesticide overuse has decimated China’s bee populations.

Even in Western Europe, bee populations are gradually but steadily increasing. The trends are similar in other regions around the world, and much of the decline in overall European bee populations is due to a massive drop in managed honeybee hives in Eastern Europe, after subsidies ended with the collapse of the Soviet Union. In fact, since neonicotinoid pesticides began enjoying widespread use in the 1990s, overall bee declines appear to be leveling off or have even diminished.

Nevertheless, in response to pressure campaigns, the EU banned neonics – an action that could well make matters worse, as farmers will be forced to use older, less effective, more bee-lethal insecticides like pyrethroids. Now environmentalists want a similar ban imposed by the EPA in the United States.

That’s a terrible idea. The fact is, bee populations tend to fluctuate, especially by region, and “it’s normal for a beekeeper to lose part of his hive over the winter months,” notes University of Montana bee scientist Dr. Jerry Bromenshenk. Of course, beekeepers want to minimize such losses, to avoid having to replace too many bees or hives before the next pollination season begins. It’s also true that the United States did experience a 31% loss in managed bee colonies during the 2012-2013 winter season, according to the US Agriculture Department.

Major losses in beehives year after year make it hard for beekeepers to turn a profit, and many have left the industry. “We can replace the bees, but we can’t replace beekeepers with 40 years of experience,” says Tim Tucker, vice president of the American Beekeeping Federation. But all these are different issues from whether bees are dying off in unprecedented numbers, and what is causing the losses.

Moreover, even 30% losses do not mean bees are on the verge of extinction. In fact, “the number of managed honeybee colonies in the United States has remained stable over the past 15 years, at about 1.5 million” – with 20,000 to 30,000 bees per hive – says Bryan Walsh, author of the Time article.

That’s far fewer than the 5.8 million managed US hives in 1946. But this largely reflects competition from cheap imported honey from China and South America and “the general rural depopulation of the US over the past half-century,” Walsh notes. Extensive truck transport of managed hives, across many states and regions, to increasingly larger orchards and farms, also played a role in reducing managed hive numbers over these decades.

CCD cases began spiking in the USA in 2006, and beekeepers reported losing 30-90% of the bees in many hives. Thankfully, incidents of CCD are declining, and the mysterious phenomenon was apparently not a major factor over the past winter. But researchers are anxious to figure out what has been going on.

Both Australia and Canada rely heavily on neonicotinoid pesticides. However, Australia’s honeybees are doing so well that farmers are exporting queen bees to start new colonies around the world; Canadian hives are also thriving. Those facts suggest that these chemicals are not a likely cause. Bees are also booming in Africa, Asia and South America.

However, there definitely are areas where mass mortalities have been or remain a problem. Scientists and beekeepers are trying hard to figure out why that happens, and how future die-offs can be prevented.

Walsh’s article suggests several probable culprits. Topping his list is the parasitic Varroa destructor mite that has ravaged U.S. bee colonies for three decades. Another is American foulbrood bacteria that kill developing bees. Other suspects include small hive beetles, viral diseases, fungal infections, overuse of miticides, failure of beekeepers to stay on top of colony health, or even the stress of colonies constantly being moved from state to state. Yet another might be the fact that millions of acres are planted in monocultures – like corn, with 40% of the crop used for ethanol, and soybeans, with 12% used for biodiesel – creating what Walsh calls “deserts” that are devoid of pollen and nectar for bees.

A final suspect is the parasitic phorid fly, which lays eggs in bee abdomens. As larvae grow inside the bees, literally eating them alive, they affect the bees’ ability to function and cause them to walk around in circles, disoriented and with no apparent sense of direction. Biology professor John Hafernik’s San Francisco State University research team said the “zombie-like” bees leave their hives at night, fly blindly toward light sources, and eventually die. The fly larvae then emerge from the dead bees.

The team found evidence of the parasitic fly in 77% of the hives they sampled in the San Francisco Bay area, and in some South Dakota and Central Valley, California, hives. In addition, many of the bees, phorid flies and larvae contained genetic traces from another parasite, as well as a virus that causes deformed wings. All these observations have been linked to colony collapse disorder.

But because this evidence doesn’t fit their anti-insecticide fund-raising appeals, radical environmentalists have largely ignored it. They have likewise ignored strong evidence that innovative neonicotinoid pest control products do not harm bees when they are used properly. Sadly, activist noise has deflected public and regulator attention away from Varroa mites, phorid flies, and other serious global threats to bees.

The good news is that the decline in CCD occurrence has some researchers thinking it’s a cyclical malady that is entering a downswing – or that colonies are developing resistance. The bottom line is that worldwide trends show bees are flourishing. “A world without bees” is not likely.

So now, as I said in a previous article on this topic, we need to let science do its job, and not jump to conclusions or short-circuit the process. We need answers, not scapegoats – or the recurring bee mortality problem is likely to spread, go untreated or even get worse.

_____________

Paul Driessen is senior policy analyst for the Committee For A Constructive Tomorrow (www.CFACT.org) and author of Eco-Imperialism: Green power – Black death.

From http://www.daff.gov.au/animal-plant-health/pests-diseases-weeds/bee/honeybees-FAQs — What effect has Varroa had on the number of managed bee hives in other countries?


Figure 1. The number of managed honey bee hives in the world from 1961-2008 (FAO Stat, 2011).

Varroa had no perceptible effect on the number of hives reported in Europe. The number of honey-bee hives in Europe declined sharply in the early 1990s, coinciding with the end of communism, and the end of state support for beekeepers, in the previously communist bloc countries of Eastern Europe. The number of hives reported in Western European countries remained unchanged over the same period of time.

Figure 2. The number of managed hives in the whole of Europe, former Warsaw Pact countries and former EU-15 member countries from 1961-2008 (Food and Agriculture Organization Stat, 2011).

In the United States, the number of managed hives has declined steadily since the late 1940s, around 40 years before Varroa became established there. This decline reflects declining terms of trade for United States beekeepers as the result of competition with lower-cost honey-producing countries in South America. In contrast, due to their competitive advantage, the number of hives in South America has grown steadily since the mid-1970s, despite Varroa already being established there. However, the J strain of V. destructor in South America is less damaging than the K strain of V. destructor in the United States.


Figure 3. The number of managed honey bee hives in the United States and South American countries from 1961-2008 (FAO Stat, 2011).

Aubrey de Grey Comments on the “Hallmarks of Aging” Paper – Article by Reason

The New Renaissance Hat
Reason
September 8, 2013
******************************

The Hallmarks of Aging paper was published earlier this year. It is an outline by a group of noted researchers that divides up degenerative aging into what they believe are its fundamental causes, with extensive references to support their conclusions, and proposes research strategies aimed at building the means to address each of these causes. This is exactly what we want to see more of in the aging research community: deliberate, useful plans that follow the Strategies for Engineered Negligible Senescence (SENS) model of approaching aging.

Read through the Hallmarks of Aging and you’ll see that it is essentially a more mild-mannered and conservative restatement of the SENS approach to aging – written after more than ten years of advocacy and publication and persuasion within the scientific community by SENS supporters. To my eyes, the appearance of such things shows that SENS is winning the battle of ideas within the scientific community, and it is only a matter of time before it and similar repair-based efforts aimed at human rejuvenation dominate the field. Rightly so, too, and it can’t happen soon enough for my liking. SENS and SENS-like research is the only way we’re likely to see meaningful life extension technologies emerge before those of us in middle age now die, so the more of it taking place the better.

Aubrey de Grey, author of the original SENS proposals and now Chief Science Officer of the SENS Research Foundation that funds and guides rejuvenation research programs, is justifiably pleased by the existence of the Hallmarks of Aging. See this editorial in the latest Rejuvenation Research, for example:

A Divide-and-Conquer Assault on Aging: Mainstream at Last

Quote:

On June 6th, a review appeared concerning the state of aging research and the promising ways forward for the field. So far, so good. But this was not any old review. Here’s why: (a) it appeared in Cell, one of the most influential journals in biology; (b) it is huge by Cell’s standards – 24 pages, with well over 300 references; (c) all its five authors are exceptionally powerful opinion-formers – senior, hugely accomplished and respected scientists; (d) above all, it presents a dissection of aging into distinct (though inter-connected) processes and recommends a correspondingly multi-pronged (“divide and conquer”) approach to intervention.

It will not escape those familiar with SENS that this last feature is not precisely original, and it may arouse some consternation that no reference is made in the paper to that prior work. But do I care? Well, maybe a little – but really, hardly at all. SENS is not about me, nor even about SENS as currently formulated (though a depressing number of commentators in the field persist in presuming that it is). Rather, it is about challenging a profound, entrenched, and insidious dogma that has consumed biogerontology for the past 20 years, and which this new review finally – finally! – challenges (albeit somewhat diplomatically) with far more authority than I could ever muster.

Aging has been shown, over several decades, to consist of a multiplicity of loosely linked processes, implying that robust postponement of age-related ill-health requires a divide-and-conquer approach consisting of a panel of interventions. Because such an approach is really difficult to implement, gerontologists initially adopted a position of such extreme pessimism that all talk of intervention became unfashionable. The discovery of genetic and pharmacological ways to mimic [calorie restriction], after a brief period of confused disbelief, was so seductive as a way to raise the field’s profile that it was uncritically embraced as the fulcrum of translational gerontology for 20 years, but finally that particular emperor has been decisively shown to have no biomedically relevant clothes.

The publication of so authoritative a commentary adopting the “paleogerontological” position, that aging is indeed chaotic and complex and intervention will indeed require a panel of therapies, but now combined with evidence-based optimism as to the prospects for implementing such a panel, is a key step in the elevation of translational gerontology to a truly mature field.

In essence, as de Grey points out, work on aging has been following the wrong, slow, expensive, low-yield path for a couple of decades: the path of deciphering the mechanisms of calorie restriction and altering genes and metabolism to slightly slow down aging. This path cannot result in large gains in life expectancy and long-term health, and it cannot result in therapies that will greatly help people who are already old. What use is slowing down the accumulation of the damage of aging if you are already just a little more damage removed from death, and frail and suffering because of it, and the treatment will meaningfully alter none of that? If we want to add decades or more to our healthy life spans before we die, then rejuvenation and repair of damage are what is needed: ways to reverse frailty, remove suffering, and restore youthful function.

Reason is the founder of The Longevity Meme (now Fight Aging!). He saw the need for The Longevity Meme in late 2000, after spending a number of years searching for the most useful contribution he could make to the future of healthy life extension. When not advancing the Longevity Meme or Fight Aging!, Reason works as a technologist in a variety of industries.  

This work is reproduced here in accord with a Creative Commons Attribution license.  It was originally published on FightAging.org.

Longevitize!: The Master Compendium for the Life-Extension Movement – Post by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
September 7, 2013
******************************


Longevitize!: Essays on the Science, Philosophy & Politics of Longevity is a new and (literally) vital compilation, edited by Franco Cortese, which assembles perhaps the widest array of resources on radical life extension in one location. You can read a detailed description of the book here. Cortese’s ambitious projects have breathed new life into the transhumanist and immortalist movements, and Longevitize promises to be perhaps his most influential contribution to date, illustrating a thorough grasp of the current state of the efforts to defeat senescence and enable humankind to transcend its primordial limitations.

In addition to 164 articles representing diverse perspectives about the scientific, philosophical, and political aspects and implications of indefinite life extension, this compendium includes an immensity of links to external resources, including books, articles, and videos. I am proud that my Resources on Indefinite Life Extension (RILE) page formed the crux of the book’s Appendix II. Longevitize permits the reader to delve as deeply as can be desired into studying the feasibility, desirability, and possibilities for implementation of the defeat of senescence and involuntary death.

I am proud to have contributed 27 essays to this anthology, spanning 9 years of my thinking and writing on the prospect of indefinite longevity. In addition, the excellent cover was designed by my wife Wendy Stolyarov, incorporating Maxim Vorobiev’s 1842 painting, “Oak fractured by a lightning bolt. Allegory on wife’s death.” Death destroys our irreplaceable individual universes much like that lightning destroyed the tree. It is time to put an end to this travesty, and Longevitize offers an amazing toolkit and intellectual foundation for doing so. Buy this book, read it, and use it in your further intellectual explorations – including your writing, research, argumentation, and activism.

Right now, Longevitize! is available as an e-book for $9.99, both in PDF and MOBI formats, from Amazon. A hard-copy version is currently being prepared.

Against Monsanto, For GMOs – Video by G. Stolyarov II

The depredations of the multinational agricultural corporation Monsanto are rightly condemned by many. But Mr. Stolyarov points out that arguments against Monsanto’s misbehavior are not valid arguments against genetically modified organisms (GMOs) as a whole.

References

– “Against Monsanto, For GMOs” – Essay by G. Stolyarov II
– “Monsanto – Legal actions and controversies” – Wikipedia
– “Copyright Term Extension Act” – Wikipedia
– “Electronic Arts discontinues Online Pass, a controversial form of video game DRM” – Sean Hollister – The Verge – May 15, 2013
– “Extinction” – Wikipedia

Transhumanism, Technology, and Science: To Say It’s Impossible Is to Mock History Itself – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 30, 2013
******************************
One of the most common arguments made against Transhumanism, Technoprogressivism, and the transformative potentials of emerging, converging, and disruptive technologies may also be the weakest: technical infeasibility. Some thinkers attack Transhumanist claims on moral grounds, arguing that we would be committing a transgression against human dignity (in turn often based on ontological grounds of a static human nature that shan’t be tampered with); others object on grounds of safety, arguing that humanity isn’t responsible enough to wield such technologies without unleashing their destructive capabilities. These categories of counter-argument (ethics and safety, respectively) are more often than not made by people somewhat more familiar with the community and its common points of rhetoric.
***
In other words, these are the really salient and significant problems that need to be addressed by Transhumanist and Technoprogressive communities. The good news is that it is precisely the Transhumanist and Technoprogressive communities that are making the most progress in deliberating the possible repercussions of emerging technologies. The large majority of thinkers and theoreticians working on Existential Risk and Global Catastrophic Risk, at organizations like the Future of Humanity Institute and the Lifeboat Foundation, share Technoprogressive inclinations. Meanwhile, the largest proponents of the need to ensure wide availability of enhancement technologies, as well as of the need to provide personhood rights to non-biologically-substrated persons, are found amidst the ranks of Technoprogressive think tanks like the IEET.
***

A more frequent Anti-Transhumanist and Anti-Technoprogressive counter-argument, by contrast, and one most often launched by people approaching Transhumanist and Technoprogressive communities from the outside, with little familiarity with their common points of rhetoric, is the claim of technical infeasibility based upon little more than sheer incredulity.

Sometimes a concept or notion simply seems too unprecedented to be possible. But it’s just too easy for us to get stuck in a spacetime rut along the continuum of culture and feel that if something were possible, it would have either already happened or would be in the final stages of completion today. “If something is possible, then why hasn’t anyone done it? Shouldn’t the fact that it has yet to be accomplished indicate that it isn’t possible?” This conflates ought with is (which Hume showed us is a fallacy) and ought with can; ought is not necessarily correlative with either. At the risk of stating the laughably obvious, something must occur at some point in order for it to occur at all. The Moon landing happened in 1969 because it happened in 1969, and to have argued in 1968 that it simply wasn’t possible solely because it had never been done before would not have been a valid argument for its technical infeasibility.

If history has shown us anything, it has shown us that history is a fantastically poor indicator of what will and will not become feasible in the future. Statistically speaking, it seems as though the majority of things that were said to be impossible to implement via technology have nonetheless come into being. Likewise, it seems as though the majority of feats it was said to be possible to facilitate via technology have also come into being. The ability to possiblize the seemingly impossible via technological and methodological in(ter)vention has been exemplified throughout the course of human history so prominently that we might as well consider it a statistical law.

We can feel the sheer fallibility of the infeasibility-from-incredulity argument intuitively when we consider how credible it would have seemed a mere 100 years ago to claim that we would soon be able to send sentences into the air, to be routed to a device in your pocket (and only your pocket, not the device in the pocket of the person sitting right beside you). How likely would it have seemed 200 years ago if you claimed that 200 years hence it would be possible to sit comfortably and quietly in a chair in the sky, inside a large tube of metal that fails to fall fatally to the ground?

Simply look around you. An idiosyncratic genus of great ape did this! Consider how remarkably absurd it would seem for the gorilla genus to have coordinated their efforts to build skyscrapers; to engineer devices that took them to the Moon; to be able to send a warning or mating call to the other side of the earth in less time than such a call could actually be made via physical vocal cords. We live in a world of artificial wonder, and act as though it were the most mundane thing in the world. But considered in terms of geological time, the unprecedented feat of culture and artificial artifact just happened. We are still in the fledgling infancy of the future, which only began when we began making it ourselves.
***

We have no reason whatsoever to doubt the eventual technological feasibility of anything, really, when we consider all the things that were said to be impossible yet happened, all the things that were said to be possible and did happen, and all the things that were unforeseen completely yet happened nonetheless. In light of history, it seems more likely that a given thing would eventually be possible via technology than that it wouldn’t ever be possible. I fully appreciate the grandeur of this claim – but I stand by it nonetheless. To claim that a given ability will probably not be eventually possible to implement via technology is to laugh in the face of history to some extent.

The main exceptions to this claim are abilities wherein you limit or specify the route of implementation. Thus it probably would not be eventually possible to, say, infer the states of all the atoms comprising the Eiffel Tower from the state of a single atom in your fingernail: this is a category of ability where you specify the implementation as part of the end-ability – as in the case above, where the end-ability was to infer the state of all the atoms in the Eiffel Tower from the state of a single atom.

These exceptions also serve to illustrate the paramount feature allowing technology to possiblize the seemingly improbable: novel means of implementation. Very often there is a bottleneck in the current system we use to accomplish something that limits the scope of its abilities and prevents certain objectives from being facilitated by it. In such cases a whole new paradigm of approach is what moves progress forward to realizing that objective. If the goal is the reversal and indefinite remediation of the causes and sources of aging, the paradigms of medicine available at the turn of the 20th century would have seemed to be unable to accomplish such a feat.

The new paradigm of biotechnology and genetic engineering was needed to formulate a scientifically plausible route to the reversal of aging-correlated molecular damage – a paradigm somewhat non-inherent in the medical paradigms and practices common at the turn of the 20th Century. It is the notion of a new route to implementation, a wholly novel way of making the changes that could lead to a given desired objective, that constitutes the real ability-actualizing capacity of technology – and one that such cases of specified implementation fail to take account of.

One might think that there are other clear exceptions to this as well: devices or abilities that contradict the laws of physics as we currently understand them – e.g., perpetual-motion machines. Yet even here we see many historical antecedents exemplifying our short-sighted foresight in regard to “the laws of physics”. Our understanding of the physical “laws” of the universe undergoes massive upheaval from generation to generation. Thomas Kuhn’s The Structure of Scientific Revolutions challenged the predominant view that scientific progress occurred by accumulated development and discovery when he argued that scientific progress is instead driven by the rise of new conceptual paradigms categorically dissimilar to those that preceded them (Kuhn, 1962), and which then define the new predominant directions in research, development, and discovery in almost all areas of scientific discovery and conceptualization.

Kuhn’s insight can be seen to be paralleled by the recent rise in popularity of Singularitarianism, which today seems to have lost its strict association with I.J. Good‘s posited type of intelligence explosion created via recursively self-modifying strong AI, and now seems to encompass any vision of a profound transformation of humanity or society through technological growth, and the introduction of truly disruptive emerging and converging (e.g., NBIC) technologies.

This epistemic paradigm holds that the future is less determined by the smooth progression of existing trends and more by the massive impact of specific technologies and occurrences – the revolution of innovation. Kurzweil’s own version of Singularitarianism (Kurzweil, 2005) uses the systemic progression of trends in order to predict a state of affairs created by the convergence of such trends, wherein the predictable progression of trends points to their own destruction in a sense, as the trends culminate in our inability to predict past that point. We can predict that there are factors that will significantly impede our predictive ability thereafter. Kurzweil’s and Kuhn’s thinking is also paralleled by Buckminster Fuller’s notion of ephemeralization (i.e., doing more with less), and by the post-industrial information economies and socioeconomic paradigms described by Alvin Toffler (Toffler, 1970), John Naisbitt (Naisbitt, 1982), and Daniel Bell (Bell, 1973), among others.

It can also partly be seen to be inherent in almost all formulations of technological determinism, especially variants of what I call reciprocal technological determinism (not simply that technology determines or largely constitutes the determining factors of societal states of affairs, not simply that tech affects culture, but rather that culture affects technology, which then affects culture, which then affects technology) à la Marshall McLuhan (McLuhan, 1964). This broad epistemic paradigm, wherein the state of progress is more determined by small but radically disruptive changes, innovations, and deviations than by the continuation or convergence of smooth and slow-changing trends, can be seen to be inherent in variants of technological determinism because technology is ipso facto (or by its very defining attributes) categorically new and paradigmatically disruptive, and if culture is affected significantly by technology, then it is also affected by punctuated instances of unintended radical innovation untended by trends.

That being said, as Kurzweil has noted, a given technological paradigm “grows out of” the paradigm preceding it, and so the extents and conditions of a given paradigm will to some extent determine the conditions and allowances of the next paradigm. But that is not to say that they are predictable; they may be inherent while still remaining non-apparent. After all, the trend of mechanical components’ increasing miniaturization could be seen hundreds of years ago (e.g., Babbage knew that the mechanical precision available via the manufacturing paradigms of his time would impede his ability to realize his Analytical Engine, but that its implementation would one day be made possible by the trend of increasingly precise manufacturing standards), but the fact that this trend could continue to culminate in the ephemeralization of Bucky Fuller (Fuller, 1976) or the mechanosynthesis of K. Eric Drexler (Drexler, 1986) was far less apparent.

Moreover, the types of occurrence allowed by a given scientific or methodological paradigm seem at least intuitively to expand, rather than contract, as we move forward through history. This can be seen lucidly in the rise of modern physics in the early 20th Century, which delivered such conceptual affronts to our intuitive notions of the possible as non-locality (i.e., quantum entanglement – and with it quantum information teleportation and even quantum energy teleportation, or in other words faster-than-light causal correlation between spatially separated physical entities), Einstein’s theory of relativity (which implied such counter-intuitive notions as measurement of quantities being relative to the velocity of the observer, e.g., the passing of time as measured by clocks will be different in space than on earth), and the hidden-variable theory of David Bohm (which implied such notions as the velocity of any one particle being determined by the configuration of the entire universe). These notions belligerently contradict what we feel intuitively to be possible. Here we have claims that such strange abilities as informational and energetic teleportation, faster-than-light causality (or at least faster-than-light correlation of physical and/or informational states) and spacetime dilation are natural, non-technological properties and abilities of the physical universe.

Technology is Man’s foremost mediator of change; it is by and large through the use of technology that we expand the parameters of the possible. This is why the fact that these seemingly fantastic feats were claimed to be possible “naturally”, without technological implementation or mediation, is so significant. The notion that they are possible without technology makes them all the more fantastical and intuitively improbable.

We also sometimes forget the even more fantastic claims of what can be done through the use of technology, such as stellar engineering and mega-scale engineering, made by some of the big names in science. There is the Dyson Sphere of Freeman Dyson, which details a technological method of harnessing potentially the entire energetic output of a star (Dyson, 1960). One can also find speculation made by Dyson concerning the ability for “life and communication [to] continue for ever, using a finite store of energy” in an open universe by utilizing smaller and smaller amounts of energy to power slower and slower computationally emulated instances of thought (Dyson, 1979).

There is the Tipler Cylinder (also called the Tipler Time Machine) of Frank J. Tipler, which described a dense cylinder of infinite length rotating about its longitudinal axis to create closed timelike curves (Tipler, 1974). While Tipler speculated that a cylinder of finite length could produce the same effect if rotated fast enough, he didn’t provide a mathematical solution for this second claim. There is also speculation by Tipler on the ability to utilize energy harnessed from gravitational shear created by the forced collapse of the universe at different rates and different directions, which he argues would allow the universe’s computational capacity to diverge to infinity, essentially providing computationally emulated humans and civilizations the ability to run for an infinite duration of subjective time (Tipler, 1986, 1997).

We see such feats of technological grandeur paralleled by Kurt Gödel, who produced an exact solution to the Einstein field equations that describes a cosmological model of a rotating universe (Gödel, 1949). While cosmological evidence (e.g., suggesting that our universe is not a rotating one) indicates that his solution doesn’t describe the universe we live in, it nonetheless constitutes a hypothetically possible cosmology in which time-travel (again, via a closed timelike curve) is possible. And because closed timelike curves seem to require large amounts of acceleration – i.e. amounts not attainable without the use of technology – Gödel’s case constitutes a hypothetical cosmological model allowing for technological time-travel (which might be non-obvious, since Gödel’s case doesn’t involve such technological feats as a rotating cylinder of infinite length, rather being a result derived from specific physical and cosmological – i.e., non-technological – constants and properties).

These are large claims made by large names in science (i.e., people who do not make claims frivolously, and in most cases require quantitative indications of their possibility, often in the form of mathematical solutions, as in the cases mentioned above) and all of which are made possible solely through the use of technology. Such technological feats as the computational emulation of the human nervous system and the technological eradication of involuntary death pale in comparison to the sheer grandeur of the claims and conceptualizations outlined above.

We live in a very strange universe, which is easy to forget midst our feigned mundanity. We have no excuse to express incredulity at Transhumanist and Technoprogressive conceptualizations considering how stoically we accept such notions as the existence of sentient matter (i.e., biological intelligence) or the ability of a genus of great ape to stand on extraterrestrial land.

Thus, one of the most common counter-arguments launched at many Transhumanist and Technoprogressive claims and conceptualizations – namely, technical infeasibility based upon nothing more than incredulity and/or the lack of a definitive historical precedent – is one of the most baseless counter-arguments as well. It would be far more credible to argue for the technical infeasibility of a given endeavor within a certain time-frame. Not only do we have little, if any, indication that a given ability or endeavor will fail to eventually become realizable via technology given enough development-time, but we even have historical indication of the very antithesis of this claim, in the form of the many, many instances in which a given endeavor or feat was said to be impossible, only to be realized via technological mediation thereafter.

It is high time we accepted the fallibility of base incredulity and the infeasibility of the technical-infeasibility argument. I remain stoically incredulous at the audacity of fundamental incredulity, for nothing should be incredulous to man, who makes his own credibility in any case, and who is most at home in the necessary superfluous.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

References

Bell, D. (1973). “The Coming of Post-Industrial Society: A Venture in Social Forecasting.” New York: Basic Books. ISBN 0-465-01281-7.

Dyson, F. (1960) “Search for Artificial Stellar Sources of Infrared Radiation”. Science 131: 1667-1668.

Dyson, F. (1979). “Time without end: Physics and biology in an open universe,” Reviews of Modern Physics 51 (3): 447-460.

Fuller, R.B. (1938). “Nine Chains to the Moon.” Anchor Books pp. 252–59.

Gödel, K. (1949). “An example of a new type of cosmological solution of Einstein’s field equations of gravitation”. Rev. Mod. Phys. 21 (3): 447–450.

Kuhn, T.S. (1962). “The Structure of Scientific Revolutions” (1st ed.). University of Chicago Press. LCCN 62019621.

Kurzweil, R. (2005). “The Singularity is Near.” Penguin Books.

McLuhan, M. (1964). “Understanding Media: The Extensions of Man”. 1st Ed. McGraw Hill, NY.

Naisbitt, J. (1982). “Megatrends: Ten New Directions Transforming Our Lives.” Warner Books.

Tipler, F. (1974) “Rotating Cylinders and Global Causality Violation”. Physical Review D9, 2203-2206.

Tipler, F. (1986). “Cosmological Limits on Computation”, International Journal of Theoretical Physics 25 (6): 617-661.

Tipler, F. (1997). The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead. New York: Doubleday. ISBN 0-385-46798-2.

Toffler, A. (1970). “Future shock.” New York: Random House.

Intimations of Imitations: Visions of Cellular Prosthesis and Functionally Restorative Medicine – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 23, 2013
******************************

In this essay I argue that technologies and techniques used and developed in the fields of Synthetic Ion Channels and Ion-Channel Reconstitution, which have emerged from supramolecular chemistry and bio-organic chemistry over the past four decades, can be applied toward the purpose of gradual cellular (and particularly neuronal) replacement. This would create a new interdisciplinary field that applies such techniques and technologies toward the goal of the indefinite functional restoration of cellular mechanisms and systems, as opposed to their currently proposed uses of aiding in the elucidation of cellular mechanisms and their underlying principles and of serving as biosensors.

In earlier essays (see here and here) I identified approaches to the synthesis of non-biological functional equivalents of neuronal components (i.e., ion-channels, ion-pumps, and membrane sections) and their sectional integration with the existing biological neuron — a sort of “physical” emulation, if you will. It has only recently come to my attention that there is an existing field emerging from supramolecular and bio-organic chemistry centered around the design, synthesis, and incorporation/integration of both synthetic/artificial ion channels and artificial bilipid membranes (i.e., lipid bilayer). The potential uses for such channels commonly listed in the literature have nothing to do with life-extension, however, and the field is, to my knowledge, yet to envision the use of replacing our existing neuronal components as they degrade (or before they are able to), rather seeing such uses as aiding in the elucidation of cellular operations and mechanisms and as biosensors. I argue here that the very technologies and techniques that constitute the field (Synthetic Ion Channels & Ion-Channel/Membrane Reconstitution) can be used towards the purposes of indefinite longevity and life-extension through the iterative replacement of cellular constituents (particularly the components comprising our neurons – ion-channels, ion-pumps, sections of bi-lipid membrane, etc.) so as to negate the molecular degradation they would have otherwise eventually undergone.

While I envisioned an electro-mechanical-systems approach in my earlier essays, the field of Synthetic Ion-Channels from the start in the early 1970s applied a molecular approach to the problem of designing molecular systems that produce certain functions according to their chemical composition or structure. Note that this approach corresponds to (or can be categorized under) the passive-physicalist sub-approach of the physicalist-functionalist approach (the broad approach overlying all varieties of physically embodied, “prosthetic” neuronal functional replication) identified in an earlier essay.

The field of synthetic ion channels is also referred to as ion-channel reconstitution, which designates “the solubilization of the membrane, the isolation of the channel protein from the other membrane constituents and the reintroduction of that protein into some form of artificial membrane system that facilitates the measurement of channel function,” and more broadly denotes “the [general] study of ion channel function and can be used to describe the incorporation of intact membrane vesicles, including the protein of interest, into artificial membrane systems that allow the properties of the channel to be investigated” [1]. The field has been active since the 1970s, with experimental successes in the incorporation of functioning synthetic ion channels into biological bilipid membranes and artificial membranes dissimilar in molecular composition and structure to biological analogues underlying supramolecular interactions, ion selectivity, and permeability throughout the 1980s, 1990s, and 2000s. The relevant literature suggests that their proposed use has thus far been limited to the elucidation of ion-channel function and operation, the investigation of their functional and biophysical properties, and to a lesser degree for the purpose of “in-vitro sensing devices to detect the presence of physiologically active substances including antiseptics, antibiotics, neurotransmitters, and others” through the “… transduction of bioelectrical and biochemical events into measurable electrical signals” [2].

Thus my proposal of gradually integrating artificial ion-channels and/or artificial membrane sections for the purpose of indefinite longevity (that is, their use in replacing existing biological neurons towards the aim of gradual substrate replacement, or indeed even in the alternative use of constructing artificial neurons to – rather than replace existing biological neurons – become integrated with existing biological neural networks towards the aim of intelligence amplification and augmentation, while assuming functional and experiential continuity with our existing biological nervous system) appears to be novel, while the notion of artificial ion-channels and neuronal membrane systems in general had already been conceived (and successfully created and experimentally verified, though presumably not integrated in vivo).

The field of Functionally Restorative Medicine (and the orphan sub-field of whole-brain gradual-substrate replacement, or “physically embodied” brain-emulation, if you like) can take advantage of the decades of experimental progress in this field, incorporating both the technological and methodological infrastructures used in and underlying the field of Ion-Channel Reconstitution and Synthetic/Artificial Ion Channels & Membrane-Systems (and the technologies and methodologies underlying their corresponding experimental-verification and incorporation techniques) for the purpose of indefinite functional restoration via the gradual and iterative replacement of neuronal components (including sections of bilipid membrane, ion channels, and ion pumps) by MEMS (micro-electro-mechanical systems) or more likely NEMS (nano-electro-mechanical systems).

The technological and methodological infrastructure underlying this field can be utilized both for the creation of artificial neurons and for the artificial synthesis of normative biological neurons. Much work in the field required artificially synthesizing cellular components (e.g., bilipid membranes) with structural and functional properties as similar to normative biological cells as possible, so that alternative designs (i.e., those dissimilar to the normal structural and functional modalities of biological cells or cellular components), and how they affect and elucidate cellular properties, could be effectively tested. The iterative replacement of single neurons, or the sectional replacement of neurons with synthesized cellular components (including sections of the bilipid membrane, voltage-dependent ion-channels, ligand-dependent ion-channels, ion pumps, etc.), is made possible by the large body of work already done in the field. Consequently, the technological, methodological, and experimental infrastructures developed for the fields of Synthetic Ion Channels and Ion-Channel/Artificial-Membrane Reconstitution can be utilized for the purpose of (a) iterative replacement and cellular upkeep via biological analogues (or analogues not differing significantly in structure or functional and operational modality from their normal biological counterparts) and/or (b) iterative replacement with non-biological analogues of alternate structural and/or functional modalities.

Rather than sensing when a given component degrades and then replacing it with an artificially-synthesized biological or non-biological analogue, it appears to be much more efficient to determine the projected time it takes for a given component to degrade or otherwise lose functionality, and simply automate the iterative replacement in this fashion, without providing in vivo systems for detecting molecular or structural degradation. This would allow us to achieve both experimental and pragmatic success in such cellular prosthesis sooner, because it doesn’t rely on the complex technological and methodological infrastructure underlying in vivo sensing, especially on the scale of single neuron components like ion-channels, and without causing operational or functional distortion to the components being sensed.
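To make the logic of this scheduling strategy concrete, the following is a minimal sketch in Python. The component names, projected-lifetime figures, and safety margin are purely hypothetical placeholders, not values drawn from the literature; the sketch only illustrates replacement driven by projected degradation times rather than by in vivo sensing.

# Hypothetical sketch: schedule-driven replacement of cellular components,
# as opposed to replacement triggered by in vivo degradation sensing.
# Component names and lifetime figures are illustrative placeholders only.

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    projected_lifetime_days: float  # assumed mean time to functional degradation
    last_replaced_day: float = 0.0

    def due_for_replacement(self, today: float, safety_margin: float = 0.8) -> bool:
        # Replace proactively at a fraction of the projected lifetime,
        # so no in vivo degradation sensor is required.
        return today - self.last_replaced_day >= safety_margin * self.projected_lifetime_days

def replacement_schedule(components, horizon_days):
    """List (day, component name) replacement events over a time horizon."""
    events = []
    day = 0.0
    while day <= horizon_days:
        for c in components:
            if c.due_for_replacement(day):
                events.append((day, c.name))
                c.last_replaced_day = day
        day += 1.0
    return events

if __name__ == "__main__":
    # Illustrative only: an ion channel assumed to degrade faster than a membrane section.
    parts = [
        Component("voltage-gated ion channel", projected_lifetime_days=30),
        Component("bilipid membrane section", projected_lifetime_days=90),
    ]
    for day, name in replacement_schedule(parts, horizon_days=180):
        print(f"day {day:>5.0f}: replace {name}")

The trade-off mirrors the one described above: some components will be replaced earlier than strictly necessary, but the complex in vivo sensing infrastructure, along with the risk of distorting the very components being sensed, is dispensed with entirely.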

A survey of progress in the field [3] lists several broad design motifs. I will first list the design motifs falling within the scope of the survey, and the examples it provides. Selections from both papers are meant to show the depth and breadth of the field, rather than to elucidate the specific chemical or kinetic operations under the purview of each design-variety.

For a much more comprehensive, interactive bibliography of papers falling within the field of Synthetic Ion Channels or constituting the historical foundations of the field, see Jon Chui’s online bibliography here, which charts the developments in this field up until 2011.

First Survey

Unimolecular ion channels:

Examples include (a) synthetic ion channels with oligocrown ionophores, [5] (b) using α-helical peptide scaffolds and rigid push–pull p-octiphenyl scaffolds for the recognition of polarized membranes, [6] and (c) modified varieties of the ß-helical scaffold of gramicidin A [7].

Barrel-stave supramolecules:

Examples of this general class include voltage-gated synthetic ion channels formed by macrocyclic bolaamphiphiles and rigid-rod p-octiphenyl polyols [8].

Macrocyclic, branched and linear non-peptide bolaamphiphiles as staves:

Examples of this sub-class include synthetic ion channels formed by (a) macrocyclic, branched and linear bolaamphiphiles, and dimeric steroids, [9] and by (b) non-peptide macrocycles, acyclic analogs, and peptide macrocycles (respectively) containing abiotic amino acids [10].

Dimeric steroid staves:

Examples of this sub-class include channels using polyhydroxylated norcholentriol dimers [11].

p-Oligophenyls as staves in rigid-rod ß-barrels:

Examples of this sub-class include “cylindrical self-assembly of rigid-rod ß-barrel pores preorganized by the nonplanarity of p-octiphenyl staves in octapeptide-p-octiphenyl monomers” [12].

Synthetic polymers:

Examples of this sub-class include synthetic ion channels and pores comprised of (a) polyalanine, (b) polyisocyanates, (c) polyacrylates, [13] formed by (i) ionophoric, (ii) ‘smart’, and (iii) cationic polymers [14]; (d) surface-attached poly(vinyl-n-alkylpyridinium) [15]; (e) cationic oligo-polymers [16], and (f) poly(m-phenylene ethylenes) [17].

Helical ß-peptides (used as staves in the barrel-stave method):

Examples of this class include cationic ß-peptides with antibiotic activity, presumably acting as amphiphilic helices that form micellar pores in anionic bilayer membranes [18].

Monomeric steroids:

Examples of this sub-class include synthetic carriers, channels and pores formed by monomeric steroids [19], synthetic cationic steroid antibiotics that may act by forming micellar pores in anionic membranes [20], neutral steroids as anion carriers [21], and supramolecular ion channels [22].

Complex minimalist systems:

Examples of this sub-class falling within the scope of this survey include ‘minimalist’ amphiphiles as synthetic ion channels and pores [23], membrane-active ‘smart’ double-chain amphiphiles, expected to form ‘micellar pores’ or self-assemble into ion channels in response to acid or light [24], and double-chain amphiphiles that may form ‘micellar pores’ at the boundary between photopolymerized and host bilayer domains and representative peptide conjugates that may self-assemble into supramolecular pores or exhibit antibiotic activity [25].

Non-peptide macrocycles as hoops:

Examples of this sub-class falling within the scope of this survey include synthetic ion channels formed by non-peptide macrocycles, acyclic analogs [26], and peptide macrocycles containing abiotic amino acids [27].

Peptide macrocycles as hoops and staves:

Examples of this sub-class include (a) synthetic ion channels formed by self-assembly of macrocyclic peptides into genuine barrel-hoop motifs that mimic the ß-helix of gramicidin A with cyclic ß-sheets. The macrocycles are designed to bind on top of channels, and cationic antibiotics (and several analogs) are proposed to form micellar pores in anionic membranes [28]; (b) synthetic carriers, antibiotics (and analogs), and pores (and analogs) formed by macrocyclic peptides with non-natural subunits. Certain macrocycles may act as ß-sheets, possibly as staves of ß-barrel-like pores [29]; (c) bioengineered pores as sensors. Covalent capturing and fragmentations have been observed on the single-molecule level within an engineered α-hemolysin pore containing an internal reactive thiol [30].

Summary

Thus even without knowledge of supramolecular or organic chemistry, one can see that a variety of alternate approaches to the creation of synthetic ion channels, and several sub-approaches within each larger ‘design motif’ or broad-approach, not only exist but have been experimentally verified, varietized, and refined.

Second Survey

The following selections [31] illustrate the chemical, structural, and functional varieties of synthetic ion channels, categorized according to whether they are cation-conducting or anion-conducting, respectively. These examples are used to further emphasize the extent of the field, and the number of alternative approaches to synthetic ion-channel design, implementation, integration, and experimental verification already existent. Permission to use all the following selections and figures was obtained from the author of the source.

There are 6 classical design-motifs for synthetic ion-channels, categorized by structure, that are identified within the paper:

A: Unimolecular macromolecules,
B: Complex barrel-stave,
C: Barrel-rosette,
D: Barrel hoop, and
E: Micellar supramolecules.

Cation Conducting Channels:

UNIMOLECULAR

“The first non-peptidic artificial ion channel was reported by Kobuke et al. in 1992” [33].

“The channel contained “an amphiphilic ion pair consisting of oligoether-carboxylates and mono- (or di-) octadecylammonium cations. The carboxylates formed the channel core and the cations formed the hydrophobic outer wall, which was embedded in the bilipid membrane with a channel length of about 24 to 30 Å. The resultant ion channel, formed from molecular self-assembly, is cation-selective and voltage-dependent” [34].

“Later, Kobuke et al. synthesized another channel comprising a resorcinol-based cyclic tetramer as the building block. The resorcin[4]arene monomer consisted of four long alkyl chains which aggregated to form a dimeric supramolecular structure resembling that of Gramicidin A” [35]. “Gokel et al. had studied [a set of] simple yet fully functional ion channels known as “hydraphiles” [39].

“An example (channel 3) is shown in Figure 1.6, consisting of diaza-18-crown-6 crown ether groups and alkyl chains as side arms and spacers. Channel 3 is capable of transporting protons across the bilayer membrane” [40].

“A covalently bonded macrotetracycle (Figure 1.8) had shown to be about three times more active than Gokel’s ‘hydraphile’ channel, and its amide-containing analogue also showed enhanced activity” [44].

“Inorganic derivatives using crown ethers have also been synthesized. Hall et al. synthesized an ion channel consisting of a ferrocene and 4 diaza-18-crown-6 linked by 2 dodecyl chains (Figure 1.9). The ion channel was redox-active, as oxidation of the ferrocene caused the compound to switch to an inactive form” [45].

B-STAVES:

“These are more difficult to synthesize [in comparison to unimolecular varieties] because the channel formation usually involves self-assembly via non-covalent interactions” [47]. “A cyclic peptide composed of an even number of alternating D- and L-amino acids (Figure 1.10) was suggested to form a barrel-hoop structure through backbone-backbone hydrogen bonds by De Santis” [49].

“A tubular nanotube synthesized by Ghadiri et al. consists of cyclic D- and L-peptide subunits that form a flat, ring-shaped conformation and stack through an extensive anti-parallel ß-sheet-like hydrogen-bonding interaction (Figure 1.11)” [51].

“Experimental results have shown that the channel can transport sodium and potassium ions. The channel can also be constructed by the use of direct covalent bonding between the sheets so as to increase the thermodynamic and kinetic stability” [52].

“By attaching peptides to the octiphenyl scaffold, a ß-barrel can be formed via self-assembly through the formation of ß-sheet structures between the peptide chains (Figure 1.13)” [53].

“The same scaffold was used by Matile et al. to mimic the structure of macrolide antibiotic amphotericin B. The channel synthesized was shown to transport cations across the membrane” [54].

“Attaching the electron-poor naphthalene diimide (NDIs) to the same octiphenyl scaffold led to the hoop-stave mismatch during self-assembly that results in a twisted and closed channel conformation (Figure 1.14). Adding the complementary dialkoxynaphthalene (DAN) donor led to the cooperative interactions between NDI and DAN that favors the formation of barrel-stave ion channel.” [57].

MICELLAR

“These aggregate channels are formed by amphotericin involving both sterols and antibiotics arranged in two half-channel sections within the membrane” [58].

“An active form of the compound is the bolaamphiphile (two-headed amphiphile). Figure 1.15 shows an example that forms an active channel structure through dimerization or trimerization within the bilayer membrane. Electrochemical studies have shown that the monomer is inactive and that the active form involves dimers or larger aggregates” [60].

ANION CONDUCTING CHANNELS:

“A highly active, anion-selective, monomeric cyclodextrin-based ion channel was designed by Madhavan et al. (Figure 1.16). Oligoether chains were attached to the primary face of the ß-cyclodextrin head group via amide bonds. The hydrophobic oligoether chains were chosen because they are long enough to span the entire lipid bilayer. The channel was able to select ‘anions over cations’ and ‘discriminate among halide anions in the order I- > Br- > Cl- (following the Hofmeister series)’” [61].

“The anion selectivity occurred via a ring of ammonium cations positioned just beside the cyclodextrin head group. Iodide ions were transported the fastest because the activation barrier to enter the hydrophobic channel core is lower for I- than for either Br- or Cl-” [62]. “A more specific artificial anion-selective ion channel was the chloride-selective ion channel synthesized by Gokel. The building block involved a heptapeptide with proline incorporated (Figure 1.17)” [63].

Cellular Prosthesis: Inklings of a New Interdisciplinary Approach

The paper cites “nanoreactors for catalysis and chemical or biological sensors” and “interdisciplinary uses as nano-filtration membranes, drug or gene delivery vehicles/transporters, as well as channel-based antibiotics that may kill bacterial cells preferentially over mammalian cells” as some of the main applications of synthetic ion-channels [65], other than their normative use in elucidating cellular function and operation.

However, I argue that a whole interdisciplinary field – a heretofore-unrecognized approach or sub-field of Functionally Restorative Medicine – becomes possible by taking the technologies and techniques involved in constructing, integrating, and experimentally verifying either (a) non-biological analogues of ion-channels, ion-pumps (and thus transmembrane proteins in general, also sometimes referred to as transport proteins or integral membrane proteins) and membranes (which include normative bilipid membranes, non-lipid membranes, and chemically augmented bilipid membranes), or (b) artificially synthesized biological analogues of ion-channels, ion-pumps, and membranes, which are structurally and chemically equivalent to their naturally occurring counterparts but are produced synthetically – and applying those technologies and techniques toward the gradual replacement of the biological neurons constituting our nervous systems, or at least of the neuron-populations comprising the neocortex and prefrontal cortex, thereby achieving indefinite longevity through iterative procedures of gradual replacement. There is still work to be done in determining the comparative advantages and disadvantages of various structural and functional (i.e., design) motifs, and in the logistics of implementing the iterative replacement or reconstitution of ion-channels, ion-pumps, and sections of neuronal membrane in vivo.

The conceptual schemes outlined in Concepts for Functional Replication of Biological Neurons [66], Gradual Neuron Replacement for the Preservation of Subjective-Continuity [67], and Wireless Synapses, Artificial Plasticity, and Neuromodulation [68] would constitute variations on the basic approach underlying this proposed, embryonic interdisciplinary field. Certain approaches within nanomedicine itself, particularly those that constitute the functional emulation of existing cell-types – such as, but not limited to, Robert Freitas’s conceptual design for the functional emulation of the red blood cell (a.k.a. erythrocytes, haematids) [69], i.e., the Respirocyte – should themselves be seen as falling under the purview of this new approach, although not all approaches to nanomedicine (diagnostics, drug-delivery, and neuroelectronic interfacing) constitute the physical (i.e., electromechanical, kinetic, and/or molecular, physically embodied) and functional emulation of biological cells.

The field of functionally-restorative medicine in general (and of nanomedicine in particular) and the fields of supramolecular and organic chemistry converge here, where these technological, methodological, and experimental infrastructures developed in the fields of Synthetic Ion-Channels and Ion Channel Reconstitution can be employed to develop a new interdisciplinary approach that applies the logic of prosthesis to the cellular and cellular-component (i.e., sub-cellular) scale; same tools, new use. These techniques could be used to iteratively replace the components of our neurons as they degrade, or to replace them with more robust systems that are less susceptible to molecular degradation. Instead of repairing the cellular DNA, RNA, and protein transcription and synthesis machinery, we bypass it completely by configuring and integrating the neuronal components (ion-channels, ion-pumps, and sections of bilipid membrane) directly.

Thus I suggest that theoreticians of nanomedicine look to the large quantity of literature already developed in the emerging fields of synthetic ion-channels and membrane-reconstitution, towards the objective of adapting and applying existing technologies and methodologies to the new purpose of iterative maintenance, upkeep and/or replacement of cellular (and particularly neuronal) constituents with either non-biological analogues or artificially synthesized but chemically/structurally equivalent biological analogues.

This new sub-field of Synthetic Biology needs a name to differentiate it from the other approaches to Functionally Restorative Medicine. I suggest the designation ‘cellular prosthesis’.

References:

[1] Williams (1994). An introduction to the methods available for ion channel reconstitution. In D. C. Ogden (Ed.), Microelectrode Techniques: The Plymouth Workshop Edition. Cambridge: Company of Biologists.

[2] Tomich, J., & Montal, M. (1996). U.S. Patent No. 5,16,890. Washington, DC: U.S. Patent and Trademark Office.

[3] Matile, S., Som, A., & Sorde, N. (2004). Recent synthetic ion channels and pores. Tetrahedron, 60(31), 6405–6435. ISSN 0040-4020. doi:10.1016/j.tet.2004.05.052. Access: http://www.sciencedirect.com/science/article/pii/S0040402004007690

[4] Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.

[5] Ibid., p. 6411.

[6] Ibid., p. 6416.

[7] Ibid., p. 6413.

[8] Ibid., p. 6412.

[9] Ibid., p. 6414.

[10] Ibid., p. 6425.

[11] Ibid., p. 6427.

[12] Ibid., p. 6416.

[13] Ibid., p. 6419.

[14] Ibid.

[15] Ibid.

[16] Ibid., p. 6419.

[17] Ibid.

[18] Ibid., p. 6421.

[19] Ibid., p. 6422.

[20] Ibid.

[21] Ibid.

[22] Ibid.

[23] Ibid., p. 6423.

[24] Ibid.

[25] Ibid.

[26] Ibid., p. 6426.

[27] Ibid.

[28] Ibid., p. 6427.

[29] Ibid., p. 6427.

[30] Ibid., p. 6427.

[31] Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.

[32] Ibid., p. 4.

[33] Ibid.

[34] Ibid.

[35] Ibid.

[36] Ibid., p. 7.

[37] Ibid., p. 8.

[38] Ibid., p. 7.

[39] Ibid.

[40] Ibid.

[41] Ibid.

[42] Ibid.

[43] Ibid., p. 8.

[44] Ibid.

[45] Ibid., p. 9.

[46] Ibid.

[47] Ibid.

[48] Ibid., p. 10.

[49] Ibid.

[50] Ibid.

[51] Ibid.

[52] Ibid., p. 11.

[53] Ibid., p. 12.

[54] Ibid.

[55] Ibid.

[56] Ibid.

[57] Ibid.

[58] Ibid., p. 13.

[59] Ibid.

[60] Ibid., p. 14.

[61] Ibid.

[62] Ibid.

[63] Ibid., p. 15.

[64] Ibid.

[65] Ibid.

[66] Cortese, F., (2013). Concepts for Functional Replication of Biological Neurons. The Rational Argumentator. Access: https://www.rationalargumentator.com/index/blog/2013/05/gradual-neuron-replacement/

[67] Cortese, F., (2013). Gradual Neuron Replacement for the Preservation of Subjective-Continuity. The Rational Argumentator. Access: https://www.rationalargumentator.com/index/blog/2013/05/gradual-neuron-replacement/

[68] Cortese, F., (2013). Wireless Synapses, Artificial Plasticity, and Neuromodulation. The Rational Argumentator. Access: https://www.rationalargumentator.com/index/blog/2013/05/wireless-synapses/

[69] Freitas Jr., R., (1998). “Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell”. Artificial Cells, Blood Substitutes, and Immobil. Biotech. (26): 411–430. Access: http://www.ncbi.nlm.nih.gov/pubmed/9663339

Against Monsanto, For GMOs – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
June 9, 2013
******************************

                The depredations of the multinational agricultural corporation Monsanto are rightly condemned by many. Monsanto is a prominent example of a crony corporation – a company that bolsters its market dominance not through honest competition and innovation, but through the persistent use of the political and legal system to enforce its preferences against its competitors and customers. Most outrageous is Monsanto’s stretching of patents beyond all conceivable limits – attempting to patent genes and life forms and to forcibly destroy the crops of farmers who replant seeds from crops originally obtained from Monsanto.

                Yet because Monsanto is one of the world’s leading producers of genetically modified crops, campaigners who oppose all genetically modified organisms (GMOs) often use Monsanto as the poster child for the problems with GMOs as a whole. The March Against Monsanto, which took place in cities worldwide in late May of 2013, is the most recent prominent example of this conflation. The blanket condemnation of GMOs because of Monsanto’s misbehavior is deeply fallacious. The policy of a particular company does not serve to discredit an entire class of products, just because that company produces those products – even if it could be granted that the company’s actions result in its own products being more harmful than they would otherwise be.

                GMOs, in conventional usage, are any life forms which have been altered through techniques more advanced than the kind of selective breeding which has existed for millennia. In fact, the only material distinction between genetic engineering and selective breeding is in the degree to which the procedure is targeted toward specific features of an organism. Whereas selective breeding is largely based on observation of the organism’s phenotype, genetic engineering relies on more precise manipulation of the organism’s DNA. Because of its ability to more closely focus on specific desirable or undesirable attributes, genetic engineering is less subject to unintended consequences than a solely macroscopic approach. Issues of a particular company’s abuse of the political system and its attempts to render the patent system ever more draconian do not constitute an argument against GMOs or the techniques used to create them.

                Consider that Monsanto’s behavior is not unique; similar depredations are found throughout the status quo of crony corporatism, where many large firms thrive not on the basis of merit, but on the basis of political pull and institutionalized coercion. Walt Disney Corporation has made similar outrageous (and successful) attempts to extend the intellectual-property system solely for its own benefit. The 1998 Copyright Term Extension Act was primarily motivated by Disney’s lobbying to prevent the character of Mickey Mouse from entering the public domain. Yet are all films, and all animated characters, evil or wrong because of Disney’s manipulation of the legal system instead of competing fairly and honestly on the market? Surely, to condemn films on the basis of Disney’s behavior would be absurd.

                Consider, likewise, Apple Corporation, which has attempted to sue its competitors’ products out of existence and to patent the rectangle with rounded corners – a geometric shape which is no less basic an idea in mathematics than a trapezoid or an octagon. Are all smartphones, tablet computers, MP3 players, and online music services – including those of Apple’s competitors – wrong and evil solely because of Apple’s unethical use of the legal system to squelch competition? Surely not! EA Games, until May 2013, embedded crushingly restrictive digital-rights management (DRM) into its products, requiring a continuous Internet connection (and de facto continual monitoring of the user by EA) for some games to be playable at all. Are all computer games and video games evil and wrong because of EA’s intrusive anti-consumer practices? Should they all be banned in favor of only those games that use pre-1950s-era technology – e.g., board games and other table-top games? If the reader does not support the wholesale abolition, or even the limitation, of films, consumer electronics, and games as a result of the misbehavior of prominent makers of these products, then what rationale can there possibly be for viewing GMOs differently?

                Indeed, the loathing of all GMOs stems from a more fundamental fallacy, for which any criticism of Monsanto only provides convenient cover. That fallacy is the assumption that “the natural” – i.e., anything not affected by human technology, or, more realistically, human technology of sufficiently recent origin – is somehow optimal for human purposes or simply for its own sake. While it is logically conceivable that some genetic modifications to organisms could render them more harmful than they would otherwise be (though there has never been any evidence of such harms arising despite the trillions of servings of genetically modified foods consumed to date), the condemnation of all genetic modifications using techniques from the last 60 years is far more sweeping than this. Such condemnation is not and cannot be scientific; rather, it is an outgrowth of the indiscriminate anti-technology agenda of the anti-GMO campaigners. A scientific approach, based on experimentation, empirical observation, and the immense knowledge thus far amassed regarding chemistry and biology, might conceivably give rise to a sophisticated classification of GMOs based on gradations of safety, safe uses, unsafe uses, and possible yet-unknown risks. The anti-GMO campaigners’ approach, on the other hand, can simply be summarized as “Nature good – human technology bad” – not scientific or discerning at all.

                The reverence for purportedly unaltered “nature” completely ignores the vicious, cruel, appallingly wasteful (not even to mention suboptimal) conditions of any environment untouched by human influence. After all, 99.9% of all species that ever existed are extinct – the vast majority from causes that arose long before human beings evolved. The plants and animals that primitive hunter-gatherers consumed did not evolve with the intention of providing optimal nutrition for man; they simply happened to be around, attainable for humans, and nutritious enough that humans did not die right away after consuming them – and some humans (the ones that were not poisoned, or killed hunting, or murdered by their fellow men) managed to survive to reproductive age by eating these “natural” foods. Just because the primitive “paleo” diet of our ancestors enabled them to survive long enough to trigger the chain of events that led to us, does not render their lives, or their diets, ideal for emulation in every aspect. We can do better. We must do better – if protection of large numbers of human beings from famine, drought, pests, and prohibitive costs of food is to be considered a moral priority in the least. By depriving human beings of the increased abundance, resilience, and nutritional content that only the genetic modification of foods can provide, anti-GMO campaigners would sentence millions – perhaps billions – of humans to the miserable subsistence conditions and tragically early deaths of their primeval forebears, of whom the Earth could support only a few million without human agricultural interventions.

                We do not need to like Monsanto in order to embrace the life-saving, life-enhancing potential of GMOs. We need to consider the technology involved in GMOs on its own terms, imagining how we would view it if it could be delivered by economic arrangements we would prefer. As a libertarian individualist, I advocate for a world in which GMOs could be produced by thousands of competing firms, each fairly trying to win the business of consumers through the creation of superior products which add value to people’s lives. If you are justifiably concerned about the practices of Monsanto, consider working toward a world like that, instead of a world where the promise of GMOs is denied to the billions who currently owe their very existences to human technology and ingenuity.

Immortality: Bio or Techno? – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 5, 2013
******************************
This essay is the eleventh and final chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first ten chapters were previously published on The Rational Argumentator under the following titles:
***

I Was a Techno-Immortalist Before I Came of Age

From the preceding chapters in this series, one can see that I recapitulated many notions and conclusions found in normative Whole-Brain Emulation. I realized that testing for functional divergence between a candidate functional-equivalent and its original – through the virtual or artificial replication of environmental stimuli so as to coordinate their inputs – provides an experimental methodology for empirically validating the sufficiency and efficacy of different approaches. (Note, however, that such tests could not be performed to determine which NRU-designs or replication-approaches would preserve subjective-continuity, if the premises entertained during later periods of my project—that subjective-continuity may require a sufficient degree of operational “sameness”, and not just a sufficient degree of functional “sameness”—are correct.) I realized that we would only need to replicate in intensive detail and rigor those parts of our brain manifesting our personalities and higher cognitive faculties (i.e., the neocortex), and could get away with replicating at lower functional resolution the parts of the nervous system dealing with perception, actuation, and feedback between perception and actuation.

I read Eric Drexler’s Engines of Creation and imported the use of nanotechnology to facilitate both functional-replication (i.e., the technologies and techniques needed to replicate the functional and/or operational modalities of existing biological neurons) and the intensive, precise, and accurate scanning necessitated thereby. This was essentially Ray Kurzweil’s and Robert Freitas’s approach to the technological infrastructure needed for mind-uploading, as I discovered in 2010 via The Singularity is Near.

My project also bears stark similarities to Dmitry Itskov’s Project Avatar. My work on conceptual requirements for transplanting the biological brain into a fully cybernetic body — taking advantage of the technological and methodological infrastructures already in development for use in the separate disciplines of robotics, prosthetics, Brain-Computer Interfaces and sensory-substitution to facilitate the operations of the body — is a prefigurement of his Phase 1. My later work in approaches to functional replication of neurons for the purpose of gradual substrate replacement/transfer and integration also parallels his later phases, in which the brain is gradually replaced with an equivalent computational emulation.

The main difference between my project and the extant Techno-Immortalist approaches, however, lies in my later inquiries into neglected potential bases for (a) our sense of experiential subjectivity (the feeling of being, what I’ve called immediate subjective-continuity)—and thus the entailed requirements for mental substrates aiming to maintain or attain such immediate subjectivity—and (b) our sense of temporal subjective-continuity (the feeling of being the same person through a process of gradual substrate-replacement—which I take pains to remind the reader already exists in the biological brain via the natural biological process of molecular turnover, which I called metabolic replacement throughout the course of the project), and, likewise, requirements for mental substrates aiming to maintain temporal subjective-continuity through a gradual substrate-replacement/transfer procedure.

In this final chapter, I summarize the main approaches to subjective-continuity thus far considered, including possible physical bases for its current existence and the entailed requirements for NRU designs (that is, for Techno-Immortalist approaches to indefinite-longevity) that maintain such physical bases of subjective-continuity. I will then explore why “Substrate-Independent Minds” is a useful and important term, and try to dispel one particularly common and easy-to-make misconception resulting from it.

Why Should We Worry about Subjective-Continuity?

This concern marks perhaps the most telling difference between my project and normative Whole-Brain Emulation. Instead of stopping at the presumption that functional equivalence correlates with immediate subjective-continuity and temporal subjective-continuity, I explored several features of neural operation that looked like candidates for providing a basis of both types of subjective-continuity, by looking for those systemic properties and aspects that the biological brain possesses and other physical systems don’t. The physical system underlying the human mind (i.e., the brain) possesses experiential subjectivity; my premise was that we should look for properties not shared by other physical systems to find a possible basis for the property of immediate subjective-continuity. I’m not claiming that any of the aspects and properties considered definitely constitute such a basis; they were merely the avenues I explored throughout my 4-year quest to conquer involuntary death. I do claim, however, that we are forced to conclude that some aspect shared by the individual components (e.g., neurons) of the brain and not shared by other types of physical systems forms such a basis (which doesn’t preclude the possibility of immediate subjective-continuity being a spectrum or gradient rather than a definitive “thing” or process with non-variable parameters), or else that immediate subjective continuity is a normal property of all physical systems, from atoms to rocks.

A phenomenological proof of the non-equivalence of function and subjectivity or subjective-experientiality is the physical irreducibility of qualia – that we could understand in intricate detail the underlying physics of the brain and sense-organs, and nowhere derive or infer the nature of the qualia such underlying physics embodies. To experimentally verify which approaches to replication preserve both functionality and subjectivity would necessitate a science of qualia. This could be conceivably attempted through making measured changes to the operation or inter-component relations of a subject’s mind (or sense organs)—or by integrating new sense organs or neural networks—and recording the resultant changes to his experientiality—that is, to what exactly he feels. Though such recordings would be limited to his descriptive ability, we might be able to make some progress—e.g., he could detect the generation of a new color, and communicate that it is indeed a color that doesn’t match the ones normally available to him, while still failing to communicate to others what the color is like experientially or phenomenologically (i.e., what it is like in terms of qualia). This gets cruder the deeper we delve, however. While we have unchanging names for some “quales” (i.e., green, sweetness, hot, and cold), when it gets into the qualia corresponding with our perception of our own “thoughts” (which will designate all non-normatively perceptual experiential modalities available to the mind—thus, this would include wordless “daydreaming” and exclude autonomic functions like digestion or respiration), we have both far less precision (i.e., fewer words to describe) and less accuracy (i.e., too many words for one thing, which the subject may confuse; the lack of a quantitative definition for words relating to emotions and mental modalities/faculties seems to ensure that errors may be carried forward and increase with each iteration, making precise correlation of operational/structural changes with changes to qualia or experientiality increasingly harder and more unlikely).

Thus whereas the normative movements of Whole-Brain Emulation and Substrate-Independent Minds stopped at functional replication, I explored approaches to functional replication that preserved experientiality (i.e., a subjective sense of anything) and that maintained subjective-continuity (the experiential correlate of feeling like being yourself) through the process of gradual substrate-transfer.

I do not mean to undermine in any way Whole-Brain Emulation and the movement towards Substrate-Independent Minds promoted by such people as Randal Koene via, formerly, his minduploading.org website and, more recently, his Carbon Copies project, Anders Sandberg and Nick Bostrom through their WBE Roadmap, and various other projects on connectomes. These projects are untellably important, but conceptions of subjective-continuity (not pertaining to its relation to functional equivalence) are beyond their scope.

Whether or not subjective-continuity is possible through a gradual-substrate-replacement/transfer procedure is not under question. That we achieve and maintain subjective-continuity despite our constituent molecules being replaced within a period of 7 years, through what I’ve called “metabolic replacement” but what would more normatively be called “molecular-turnover” in molecular biology, is not under question either. What is under question is (a) what properties biological nervous systems possess that could both provide a potential physical basis for subjective-continuity and that other physical systems do not possess, and (b) what the design requirements are for approaches to gradual substrate replacement/transfer that preserve such postulated sources of subjective-continuity.

Graduality

This was the first postulated basis for preserving temporal subjective-continuity. Our bodily systems’ constituent molecules are all replaced within a span of 7 years, which provides empirical verification for the existence of temporal subjective-continuity through gradual substrate replacement. This is not, however, an actual physical basis for immediate subjective-continuity, like the later avenues of enquiry; it is a way of avoiding externally induced subjective-discontinuity rather than a way of maintaining the existing biological bases for subjective-continuity. We are most likely to avoid negating subjective-continuity through a substrate-replacement procedure if we try to maintain the existing degree of graduality (the molecular-turnover or “metabolic-replacement” rate) that exists in biological neurons.

The reasoning behind concerns of graduality also serves to illustrate a common misconception created by the term “Substrate-Independent Minds”. This term should denote the premise that mind can be instantiated on different types of substrate, in the way that a given computer program can run on different types of computational hardware. It stems from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that mind is an emergent process not reducible to its isolated material constituents, while still being instantiated thereby. The first (legitimate) interpretation is a refutation of all claims of metaphysical vitalism or substance dualism. The term should not denote the claim that because mind is software, we can thus send our minds (say, encoded in a wireless signal) from one substrate to another without subjective-discontinuity. This second meaning would incur the emergent effect of a non-gradual substrate-replacement procedure (that is, the wholesale reconstruction of a duplicate mind without any gradual integration procedure). In such a case one stops all causal interaction between components of the brain—in effect putting it on pause. The brain is now static. This is different even from being in an inoperative state, where at least the components (i.e., neurons) still undergo minor operational fluctuations and are still “on” in an important sense (see “Immediate Subjective-Continuity” below), which is not the case here. Beaming between substrates necessitates that all causal interaction—and thus procedural continuity—between software-components be halted during the interval of time in which the information is encoded, sent wirelessly, and subsequently decoded. It would be reinstantiated upon arrival in the new substrate, yes, but not without being put on pause in the interim. The phrase “Substrate-Independent Minds” is an important and valuable one and should indeed be championed with righteous vehemence—but only in regard to its first meaning (that mind can be instantiated on various different substrates) and not its second, illegitimate meaning (that we ourselves can switch between mental substrates, without any sort of gradual-integration procedure, and still retain subjective-continuity).

Later lines of thought in this regard consisted of positing several sources of subjective-continuity and then conceptualizing various different approaches or varieties of NRU-design that would maintain these aspects through the gradual-replacement procedure.

Immediate Subjective-Continuity

This line of thought explored whether certain physical properties of biological neurons provide the basis for subjective-continuity, and whether current computational paradigms would need to possess such properties in order to serve as a viable substrate-for-mind—that is, one that maintains subjective-continuity. The biological brain has massive parallelism—that is, separate components are instantiated concurrently in time and space. They actually exist and operate at the same time. By contrast, current paradigms of computation, with a few exceptions, are predominantly serial. They instantiate a given component or process one at a time and jump between components or processes so as to integrate these separate instances and create the illusion of continuity. If such computational paradigms were used to emulate the mind, then only one component (e.g., neuron or ion-channel, depending on the chosen model-scale) would be instantiated at a given time. This line of thought postulates that computers emulating the mind may need to be massively parallel in the same way that the biological brain is in order to preserve immediate subjective-continuity.
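
To make the contrast concrete, here is a minimal Python sketch (purely illustrative; every name and number is hypothetical): a serial update loop instantiates one model-neuron at a time, whereas a vectorized update changes every model-neuron’s state in the same step. The outputs coincide; the postulated difference concerns only how the computation is instantiated.

import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
state = rng.standard_normal(n_neurons)                            # arbitrary starting states
weights = rng.standard_normal((n_neurons, n_neurons)) / n_neurons

def serial_step(state, weights):
    # One component is instantiated (updated) at a time; while neuron i is
    # being computed, no other neuron is being instantiated.
    new_state = state.copy()
    for i in range(len(state)):
        new_state[i] = np.tanh(weights[i] @ state)
    return new_state

def parallel_step(state, weights):
    # All components change state in one concurrent (vectorized) step,
    # closer in spirit to the brain's massive parallelism.
    return np.tanh(weights @ state)

# The end results agree; the postulated concern is about instantiation, not output.
assert np.allclose(serial_step(state, weights), parallel_step(state, weights))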

Procedural Continuity

Much like the preceding line of thought, this postulates that a possible basis for temporal subjective-continuity is the resting membrane potential of neurons. While in an inoperative state—i.e., not being impinged by incoming action-potentials, or not being stimulated—it (a) isn’t definitively off, but rather produces a baseline voltage that assures that there is no break (or region of discontinuity) in its operation, and (b) still undergoes minor fluctuations from the baseline value within a small deviation-range, thus showing that causal interaction amongst the components emergently instantiating that resting membrane potential (namely ion-pumps) never halts. Logic gates, on the other hand, do not produce a continuous voltage when in an inoperative state. This line of thought claims that computational elements used to emulate the mind should exhibit the generation of such a continuous inoperative-state signal (e.g., voltage) in order to maintain subjective-continuity. The claim’s stronger version holds that the continuous inoperative-state signal produced by such computational elements undergoes minor fluctuations (i.e., state-transitions) allowed within the range of the larger inoperative-state signal, which maintains causal interaction among lower-level components and thus exhibits the postulated basis for subjective-continuity—namely procedural continuity.
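
A minimal sketch of the same contrast, with hypothetical parameter values: the model neuron’s inoperative state still emits a continuously fluctuating baseline signal, whereas an idle logic gate emits nothing at all.

import random

RESTING_POTENTIAL_MV = -70.0   # hypothetical baseline resting potential
FLUCTUATION_MV = 0.5           # hypothetical deviation range around the baseline

def neuron_inoperative_signal(steps):
    # A continuous baseline voltage that never switches off and keeps
    # fluctuating slightly: procedural continuity.
    return [RESTING_POTENTIAL_MV + random.uniform(-FLUCTUATION_MV, FLUCTUATION_MV)
            for _ in range(steps)]

def logic_gate_inoperative_signal(steps):
    # An idle logic gate: no sustained output and no fluctuation.
    return [0.0] * steps

print(neuron_inoperative_signal(5))      # values near -70 mV, never silent
print(logic_gate_inoperative_signal(5))  # flat zeros: a break in operation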

Operational Isomorphism

This line of thought claims that a possible source of subjective-continuity is the operational similarity of the baseline components comprising the emergent system instantiating mind. In physicality this isn’t a problem, because the higher-scale components (e.g., single neurons, sub-neuron components like ion-channels and ion-pumps, and individual protein complexes forming the sub-components of an ion-channel or pump) are instantiated by the lower-level components. Those lower-level components are more similar in terms of the rules determining behavior and state-changes. At the molecular scale, the features determining state-changes (intra-molecular forces, atomic valences, etc.) are the same. This changes as we go up the scale—most notably at the scale of high-level neural regions/systems. In a software model, however, we have a choice as to what scale we use as our model-scale. This postulated source of subjective-continuity would entail that we choose as our model-scale one at which the components have a high degree of this property (operational isomorphism—or similarity), and that we not choose a scale at which the components have a lesser degree of this property.

Operational Continuity

This line of thought explored the possibility that we might introduce operational discontinuity by modeling (i.e., computationally instantiating) not the software instantiated by the physical components of the neuron, but instead those physical components themselves—which for illustrative purposes can be considered as the difference between instantiating software and instantiating the physics of the logic gates giving rise to the software. Though the software would still be instantiated—vicariously, as a result of computationally instantiating its biophysical foundation rather than the software directly—we may be introducing additional operational steps and thus adding an unnecessary dimension of discontinuity that needlessly jeopardizes the likelihood of subjective-continuity.

These concerns are wholly divorced from functionalist concerns. If we disregarded these potential sources of subjective-continuity, we could still functionally replicate a mind in all empirically-verifiable measures yet nonetheless fail to create minds possessing experiential subjectivity. Moreover, the verification experiments discussed in Part 2 do provide a falsifiable methodology for determining which approaches best satisfy the requirements of functional equivalence. They do not, however, provide a method of determining which postulated sources of subjective-continuity are true—simply because we have no falsifiable measures to determine either immediate or temporal subjective-discontinuity, other than functionality. If functional equivalence failed, it would tell us that subjective-continuity failed to be maintained. If functional equivalence were achieved, however, that would not by itself establish that subjective-continuity was maintained.

Bio or Cyber? Does It Matter?

Biological approaches to indefinite-longevity, such as Aubrey de Grey’s SENS and Michael Rose’s Evolutionary Selection for Longevity, among others, have both comparative advantages and drawbacks. Their chances of introducing subjective-discontinuity are virtually nonexistent compared to those of non-biological (which I will refer to as Techno-Immortalist) approaches. This makes them at once more appealing. However, it remains to be seen whether the advantages of techno-immortalist approaches outweigh their comparative dangers in regard to their potential to introduce subjective-discontinuity. If such dangers can be obviated, however, techno-immortalist approaches have certain potentials which Bio-Immortalist projects lack—or which are at least comparatively harder to facilitate using biological approaches.

Perhaps foremost among these potentials is the ability to actively modulate and modify the operations of individual neurons, which, if integrated across scales (that is, the concerted modulation/modification of whole emergent neural networks and regions via operational control over their constituent individual neurons), would allow us to take control over our own experiential and functional modalities (i.e., our mental modes of experience and general abilities/skills), thus increasing our degree of self-determination and the control we exert over the circumstances and determining conditions of our own being. Self-determination is the sole central and incessant essence of man; it is his means of self-overcoming—of self-dissent in a striving towards self-realization—and the ability to increase the extent of such self-control, self-mastery, and self-actualization is indeed a comparative advantage of techno-immortalist approaches.

To modulate and modify biological neurons, on the other hand, necessitates either high-precision genetic engineering, or likely the use of nanotech (i.e., NEMS), because whereas the proposed NRUs already have the ability to controllably vary their operations, biological neurons necessitate an external technological infrastructure for facilitating such active modulation and modification.

Biological approaches to increased longevity also appear to necessitate less technological infrastructure in terms of basic functionality. Techno-immortalist approaches require precise scanning technologies and techniques that neither damage nor distort (i.e., affect to the point of operational and/or functional divergence from their normal in situ state of affairs) the features and properties they are measuring. However, there is a useful distinction to be made between biological approaches to increased longevity and biological approaches to indefinite longevity. Aubrey de Grey’s notion of Longevity Escape Velocity (LEV) serves to illustrate this distinction. He points out that with SENS and most biological approaches, although remediating certain biological causes of aging will extend our lives, by that time different causes of aging that were superseded (i.e., prevented from making a significant impact on aging) by the higher-impact causes may begin to make a non-negligible impact. Aubrey’s proposed solution is LEV: if we can develop remedies for these newly significant causes within the amount of time gained by remediating the first set of causes, then we can stay on the leading edge and continue to prolong our lives. This is in contrast to other biological approaches, like Eric Drexler’s conception of nanotechnological cell-maintenance and cell-repair systems, which, by virtue of being able to fix any source of molecular damage or disarray vicariously – not by eliminating the source but through iterative repair and/or replacement of the causes or “symptoms” of the source – will continue to work on any new molecular causes of damage without any new upgrades or innovations to their underlying technological and methodological infrastructures.
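
As a crude arithmetic illustration of the LEV idea (all numbers below are hypothetical): escape velocity is maintained so long as each successive remedy can be developed within the life expectancy remaining at that point, and each deployment adds more years than its development consumed.

remaining_years = 30.0                    # hypothetical current life expectancy
therapies = [                             # (years of development needed, extra years granted)
    (20.0, 25.0),
    (22.0, 30.0),
    (28.0, 35.0),
]

for dev_time, years_gained in therapies:
    if dev_time > remaining_years:
        print("Fell behind the escape-velocity curve.")
        break
    remaining_years = remaining_years - dev_time + years_gained
    print(f"Remedy deployed; remaining life expectancy is now {remaining_years:.0f} years.")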

Such systems would be more appropriately deemed indefinite-biological-longevity technologies, in contrast to mere biological-longevity technologies. Techno-immortalist approaches are by and large exclusively of the indefinite-longevity-extension variety, and so have an advantage over certain biological approaches to increased longevity, but such advantages do not apply to biological approaches to indefinite longevity.

A final advantage of techno-immortalist approaches is the independence from external environments that they provide us. They also make death by accident far less likely, both by enabling us to have more durable bodies and by providing independence from external environments, which means that certain extremes of temperature, pressure, impact-velocity, atmosphere, etc., will not immediately entail our death.

I do not want to discredit any approaches to immortality discussed in this essay, nor any I haven’t mentioned. Every striving and attempt at immortality is virtuous and righteous, and this sentiment will only become more and more apparent, culminating on the day when humanity looks back and wonders how we could have spent so very much money and effort on the Space Race to the Moon with no perceivable scientific, resource, or monetary gain (though there were some nationalistic and militaristic considerations in terms of America not being superseded on either account by Russia), yet took so long to make a concerted global effort to first demand and then implement well-funded attempts to finally defeat death—that inchoate progenitor of 100,000 unprecedented cataclysms a day. It’s true—the world ends 100,000 times a day, to be lighted upon not once more for all of eternity. Every day. What have you done to stop it?

So What?

Indeed, so what? What does this all mean? After all, I never actually built any systems, or did any physical experimentation. I did, however, do a significant amount of conceptual development and thinking on both the practical consequences (i.e., required technologies and techniques, different implementations contingent upon different premises and possibilities, etc.) and the larger social and philosophical repercussions of immortality prior to finding out about other approaches. And I planned on doing physical experimentation and building physical systems; but I thought that working on it in my youth, until such a time as to be in the position to test and implement these ideas more formally via academia or private industry, would be better for the long-term success of the endeavor.

As noted in Chapter 1, this reifies the naturality and intuitive simplicity of indefinite longevity’s ardent desirability and fervent feasibility, along a large variety of approaches ranging from biotechnology to nanotechnology to computational emulation. It also reifies the naturality and desirability of Transhumanism. I saw one of the virtues of this vision as its potential to make us freer, to increase our degree of self-determination, as giving us the ability to look and feel however we want, and the ability to be—and more importantly to become—anything we so desire. Man is marked most starkly by his urge and effort to make his own self—to formulate the best version of himself he can, and then to actualize it. We are always reaching toward our better selves—striving forward in a fit of unbound becoming toward our newest and thus truest selves; we always have been, and with any courage we always will.

Transhumanism is but the modern embodiment of our ancient striving towards increased self-determination and self-realization—of all we’ve ever been and done. It is the current best contemporary exemplification of what has always been the very best in us—the improvement of self and world. Indeed, the ‘trans’ and the ‘human’ in Transhumanism can only signify each other, for to be human is to strive to become more than human—or to become more so human, depending on which perspective you take.

So come along and long for more with me; the best is e’er yet to be!

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Koene, R. (2011). What is carboncopies.org? Retrieved February 28, 2013 from http://www.carboncopies.org/

Rose, M. (October 28 2004). Biological Immortality. In B. Klein, The Scientific Conquest of Death (pp. 17-28). Immortality Institute.

Sandberg, A., & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap, Technical Report #2008-3. Retrieved February 28, 2013, from http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013, from http://www.minduploading.org/

de Grey, A. D. N. J. (2004). Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now. PLoS Biol 2(6): e187. doi:10.1371/journal.pbio.0020187

Maintaining the Operational Continuity of Replicated Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 3, 2013
******************************
This essay is the tenth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first nine chapters were previously published on The Rational Argumentator under the following titles:
***

Operational Continuity

One of the reasons for continuing conceptual development of the physical-functionalist NRU (neuron-replication-unit) approach, despite the perceived advantages of the informational-functionalist approach, was to prepare for the possibility that computational emulation would either fail to successfully replicate a given physical process (thus a functional-modality concern) or fail to successfully maintain subjective-continuity (thus an operational-modality concern), most likely due to a difference in the physical operation of possible computational substrates compared to the physical operation of the brain (see Chapter 2). In regard to functionality, we might fail to computationally replicate (whether in simulation or emulation) a relevant physical process for reasons other than vitalism. We could fail to understand the underlying principles governing it, or we might understand its underlying principles so as to predictively model it yet still fail to understand how it affects the other processes occurring in the neuron—for instance if we used different modeling techniques or general model types to model each component, effectively being able to predictively model each individually while being unable to model how they affect each other due to model untranslatability. Neither of these cases precludes the aspect in question from being completely material, and thus completely potentially explicable using the normative techniques we use to predictively model the universe. The physical-functionalist approach attempted to solve these potential problems through several NRU sub-classes, some of which kept certain biological features and functionally replaced certain others, while other sub-classes kept alternate sets of biological features and functionally replaced the rest. These can be considered varieties of biological-nonbiological NRU hybrids that functionally integrate into their own, predominantly non-biological operation those biological features (as they exist in the biological nervous system) that we failed to replicate functionally or operationally.

The subjective-continuity problem, however, is not concerned with whether something can be functionally replicated but with whether it can be functionally replicated while still retaining subjective-continuity throughout the procedure.

This category of possible basis for subjective-continuity has stark similarities to the possible problematic aspects (i.e., operational discontinuity) of current computational paradigms and substrates discussed in Chapter 2. In that case it was postulated that discontinuity occurred as a result of taking something normally operationally continuous and making it discontinuous: namely, (a) the fact that current computational paradigms are serial (whereas the brain has massive parallelism), which may cause components to only be instantiated one at a time, and (b) the fact that the resting membrane potential of biological neurons makes them procedurally continuous—that is, when in a resting or inoperative state they are still both on and undergoing minor fluctuations—whereas normative logic gates both do not produce a steady voltage when in an inoperative state (thus being procedurally discontinuous) and do not undergo minor fluctuations within such a steady-state voltage (or, more generally, a continuous signal) while in an inoperative state. I had a similar fear in regard to some mathematical and computational models as I understood them in 2009: what if we were taking what was a continuous process in its biological environment, and—by using multiple elements or procedural (e.g., computational, algorithmic) steps to replicate what would have been one element or procedural step in the original—effectively making it discontinuous by introducing additional intermediate steps? Or would we simply be introducing a number of continuous steps—that is, if each element or procedural step were operationally continuous in the same way that the components of a neuron are, would it then preserve operational continuity nonetheless?

This led to my attempting to develop a modeling approach aiming to retain the same operational continuity as exists in biological neurons, which I will call the relationally isomorphic mathematical model. The biophysical processes comprising an existing neuron are what implement computation; by using biophysical-mathematical models as our modeling approach, we might be introducing an element of discontinuity by mathematically modeling the physical processes giving rise to a computation/calculation, rather than modeling the computation/calculation directly. It is akin to the difference between modeling a given program and modeling the physical processes of the logic elements that give rise to the program. Thus, my novel approach during this period was to explore ways of modeling the computation directly.

Rather than using a host of mathematical operations to model the physical components that themselves give rise to a different type of mathematics, we instead use a modeling approach that maintains a 1-to-1 element or procedural-step correspondence with the level-of-scale that embodies the salient (i.e., aimed-for) computation. My attempts at developing this produced the following approach, though I lack the pure mathematical and computer-science background to judge its true accuracy or utility. The components, their properties, and the inputs used for a given model (at whatever scale) are substituted by numerical values, the magnitude of which preserves the relationships (e.g., ratio relationships) between components/properties and inputs, and by mathematical operations which preserve the relationships exhibited by their interaction. For instance: if the interaction between a given component/property and a given input produces an emergent inhibitory effect biologically, then one would combine them to get their difference or their factors, respectively, depending on whether they exemplify a linear or nonlinear relationship. If the component/property and the input combine to produce emergently excitatory effects biologically, one would combine them to get their sum or products, respectively, depending on whether they increased excitation in a linear or nonlinear manner.
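
A minimal Python sketch of this combination rule (a hypothetical helper, not taken from the original notes; the “difference/factor” pairing is interpreted here as subtraction and division, the inverses of sum and product):

def combine(component_value, input_value, excitatory=True, linear=True):
    # Combine a component/property value with an input value so that the
    # arithmetic operation itself preserves the biological relationship:
    # linear -> sum/difference, nonlinear -> product/"factor" (quotient).
    if linear:
        return component_value + input_value if excitatory else component_value - input_value
    return component_value * input_value if excitatory else component_value / input_value

print(combine(1.5, 0.5, excitatory=True, linear=True))    # 2.0  (linear, excitatory: sum)
print(combine(1.5, 0.5, excitatory=False, linear=False))  # 3.0  (nonlinear, inhibitory: quotient)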

In an example from my notes, I tried to formulate how a chemical synapse could be modeled in this way. Neurotransmitters are given analog values such as positive or negative numbers, the sign of which (i.e., positive or negative) depends on whether it is excitatory or inhibitory and the magnitude of which depends on how much more excitatory/inhibitory it is than other neurotransmitters, all in reference to a baseline value (perhaps 0 if neutral or neither excitatory nor inhibitory; however, we may need to make this a negative value, considering that the neuron’s resting membrane-potential is electrically negative, and not electrochemically neutral). If they are neurotransmitter clusters, then one value would represent the neurotransmitter and another value its quantity, the sum or product of which represents the cluster. If the neurotransmitter clusters consist of multiple neurotransmitters, then two values (i.e., type and quantity) would be used for each, and the product of all values represents the cluster. Each summative-product value is given a second vector value separate from its state-value, representing its direction and speed in the 3D space of the synaptic junction. Thus by summing the products of all, the numerical value should contain the relational operations each value corresponds to, and the interactions and relationships represented by the first- and second-order products. The key lies in determining whether the relationship between two elements (e.g., two neurotransmitters) is linear (in which case they are summed), or nonlinear (in which case they are combined to produce a product), and whether it is a positive or negative relationship—in which case their factor, rather than their difference, or their product, rather than their sum, would be used. Combining the vector products would take into account how each cluster’s speed and position affects the end result, thus effectively emulating the process of diffusion across the synaptic junction. The model’s past states (which might need to be included in such a modeling methodology to account for synaptic plasticity—e.g., long-term potentiation and long-term modulation) would hypothetically be incorporated into the model via a temporal-vector value, wherein a third value (position along a temporal or “functional”/”operational” axis) is used when combining the values into a final summative product. This is similar to such modeling techniques as phase-space, which is a quantitative technique for modeling a given system’s “system-vector-states” or the functional/operational states it has the potential to possess.
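
The following is a highly simplified sketch of how such a synapse model might be set up in Python; every class, field, and number is hypothetical, the transit factor is a crude stand-in for the vector/diffusion term described above, and only linear (summative) interactions between clusters are shown.

from dataclasses import dataclass

@dataclass
class Cluster:
    strength: float   # signed: positive = excitatory, negative = inhibitory
    quantity: float   # how many neurotransmitter molecules the cluster holds
    transit: float    # 0..1 factor standing in for speed/position across the junction

    def value(self) -> float:
        # State value of the cluster: potency x quantity, scaled by how much of it
        # actually reaches the postsynaptic side (a crude diffusion stand-in).
        return self.strength * self.quantity * self.transit

def synaptic_transmission(clusters):
    # All clusters here are assumed to interact linearly, so their values are
    # simply summed; a nonlinearly interacting pair would instead be combined
    # multiplicatively, per the combination rule sketched earlier.
    return sum(c.value() for c in clusters)

release = [Cluster(+1.5, 10, 0.8),   # excitatory cluster
           Cluster(-1.0, 4, 0.9)]    # inhibitory cluster
print(synaptic_transmission(release))  # 12.0 - 3.6 = 8.4 (net excitatory)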

How excitatory or inhibitory a given neurotransmitter is may depend upon other neurotransmitters already present in the synaptic junction; thus if the relationship between one neurotransmitter and another is not the same as the relationship between that first neurotransmitter and an arbitrary third, then one cannot use static numerical values for them, because the sequence in which they were released would affect how cumulatively excitatory or inhibitory a given synaptic transmission is.

A hypothetically possible case of this would be if one type of neurotransmitter can bond or react with two or more types of neurotransmitter. Let’s say that it’s more likely to bond or react with one than with the other. If the chemically less attractive (or reactive) one were released first, it would bond anyway due to the absence of the comparatively more chemically attractive one, such that if the more attractive one were released thereafter, it wouldn’t bond, because the original neurotransmitter would have already bonded with the chemically less attractive one.

If a given neurotransmitter’s numerical value or weighting is determined by its relation to other neurotransmitters (i.e., if one is excitatory, and another is twice as excitatory, then if the first was 1.5, the second would be 3—assuming a linear relationship), and a given neurotransmitter does prove to have a different relationship to one neurotransmitter than it does another, then we cannot use a single value for it. Thus we might not be able to configure it such that the normative mathematical operations follow naturally from each other; instead, we may have to computationally model (via the [hypothetically] subjectively discontinuous method that incurs additional procedural steps) which mathematical operations to perform, and then perform them continuously without having to stop and compute what comes next, so as to preserve subjective-continuity.

We could also run the subjectively discontinuous model at a faster speed to account for its higher quantity of steps/operations and the need to keep up with the relationally isomorphic mathematical model, which possesses comparatively fewer procedural steps. Thus subjective-continuity could hypothetically be achieved (given the validity of the present postulated basis for subjective-continuity—operational continuity) via this method of intermittent external intervention, even if we need extra computational steps to replicate the single informational transformations and signal-combinations of the relationally isomorphic mathematical model.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Choosing the Right Scale for Brain Emulation – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 2, 2013
******************************
This essay is the ninth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first eight chapters were previously published on The Rational Argumentator under the following titles:
***

The two approaches falling within this class considered thus far are (a) computational models of the biophysical (e.g., electromagnetic, chemical, and kinetic) operation of the neurons—i.e., of the physical processes instantiating their emergent functionality, whether at the scale of tissues, molecules, and/or atoms, or anything in between—and (b) abstracted models, a term designating anything that computationally models the neuron using its (sub-neuron but super-protein-complex) components themselves as the chosen model-scale (whereas the former uses as its model-scale the scale at which the physical processes emergently instantiating those higher-level neuronal components exist, such as the membrane and the individual proteins forming the transmembrane protein-complexes), regardless of whether each component is abstracted as a normative-electrical-component analogue (i.e., using circuit diagrams in place of biological schematics, such as equating the lipid bilayer membrane with a capacitor connected to a variable battery) or as a mathematical model in which a relevant component or aspect of the neuron becomes a term (e.g., a variable or constant) in an equation.
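As one concrete instance of the circuit-analogue style mentioned in (b), here is a minimal sketch treating the bilayer as a capacitor in parallel with a leak conductance and a battery (the reversal potential); all parameter values are illustrative rather than physiological.

```python
# Leaky-membrane analogue: C * dV/dt = -g_leak * (V - E_rest) + I_inject (Euler integration).
C_M = 1.0        # membrane capacitance (arbitrary units)
G_LEAK = 0.1     # leak conductance
E_REST = -70.0   # resting/reversal potential (mV)
DT = 0.1         # time step (ms)

def step(v_m, i_inject):
    dv = (-G_LEAK * (v_m - E_REST) + i_inject) * DT / C_M
    return v_m + dv

v = E_REST
for _ in range(100):
    v = step(v, i_inject=1.0)   # constant injected current for 10 ms
print(round(v, 2))              # membrane potential relaxes toward -60 mV
```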

It was during the process of trying to formulate different ways of mathematically (and otherwise computationally) modeling neurons or sub-neuron regions that I laid the conceptual embryo of the first new possible basis for subjective-continuity: the notion of operational isomorphism.

A New Approach to Subjective-Continuity Through Substrate Replacement

There are two other approaches to increasing the likelihood of subjective-continuity that I explored during this period, each based on the presumption of a different possible physical basis for (dis)continuity. Note that these approaches are unrelated to graduality, which has been the main determining factor impacting the likelihood of subjective-continuity considered thus far. The new approaches consist of designing the NRUs so as to retain the respective postulated physical bases for subjective-continuity that exist in the biological brain. Thus they are unrelated to increasing the efficacy of the gradual-replacement procedure itself, and instead concern the design requirements that the functional-equivalents used to gradually replace the neurons must meet in order to maintain immediate subjective-continuity.

Operational Isomorphism

Whereas functionality deals only with the emergent effects or end-products of a given entity or process, operationality deals with the procedural operations performed so as to give rise to those emergent effects. A mathematical model of a neuron might be highly functionally equivalent while failing to be operationally equivalent in most respects. Isomorphism can be considered a measure of “sameness”, but technically means a one-to-one correspondence between the elements of two sets (which here corresponds to operational isomorphism) or between the sums or products of the elements of two sets (which corresponds to functional isomorphism, using the definition of functionality employed above). Thus, operational isomorphism is the degree to which the sub-components (be they material, as in entities, or procedural, as in processes) of two larger-scale components, or the operational modalities possessed by each respective collection of sub-components, are equivalent.
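The distinction can be illustrated with a trivial sketch: two collections whose aggregate outputs match are functionally isomorphic in the above sense, while only collections whose elements can be placed in one-to-one correspondence are operationally isomorphic. The helper names are invented for illustration.

```python
from collections import Counter

def functionally_isomorphic(a, b):
    return sum(a) == sum(b)            # same emergent end-product (sum of the elements)

def operationally_isomorphic(a, b):
    return Counter(a) == Counter(b)    # a one-to-one correspondence between the elements

x = [1, 2, 3]
y = [3, 2, 1]
z = [6]
print(functionally_isomorphic(x, z), operationally_isomorphic(x, z))  # True  False
print(functionally_isomorphic(x, y), operationally_isomorphic(x, y))  # True  True
```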

To what extent does the brain possess operational isomorphism? It seems to depend on the scale being considered. At the highest scale, different areas of the nervous system are classed as systems (as in functional taxonomies) or regions (as in anatomical taxonomies). At this level the separate regions (i.e., components of a shared scale) differ widely from one another in terms of operational modality; they process information very differently from the way other components on the same scale process information. If this scale were chosen as the model-scale of our replication-approach, and the preceding premise (that the physical basis for subjective-continuity is the degree of operational isomorphism between components at a given scale) is accepted, then we would in such a case have a high probability of replicating functionality, but a low probability of retaining subjective-continuity through gradual replacement. This would be true even if we used the degree of operational isomorphism between separate components as the only determining factor for subjective-continuity and ignored concerns of graduality (e.g., the scale or rate—or scale-to-rate ratio—at which gradual substrate replacement occurs).

Contrast this to the molecular scale, where the operational modality of each component (being a given molecule) and the procedural rules determining the state-changes of components at this scale are highly isomorphic. The state-changes of a given molecule are determined by molecular and atomic forces. Thus if we use an informational-functionalist approach, choose a molecular scale for our model, and accept the same premises as the first example, we would have a high probability of both replicating functionality and retaining subjective-continuity through gradual replacement because the components (molecules) have a high degree of operational isomorphism.

Note that this is only a requirement for the sub-components instantiating the high-level neural regions/systems (such as the neocortex) that embody our personalities and higher cognitive faculties; i.e., even if a molecular model-scale proved necessary for the reasons described above, we wouldn’t have to use it for the whole brain, which would be very computationally intensive.

So at the atomic and molecular scales the brain possesses a high degree of operational isomorphism. On the scale of the individual protein complexes, which collectively form a given sub-neuronal component (e.g., an ion channel), components still appear to possess a high degree of operational isomorphism, because all state-changes are determined by the rules governing macroscale proteins and protein-complexes (i.e., biochemistry, and particularly protein-protein interactions); by virtue of being made of the same general constituents (amino acids), all components at this scale share the factors determining their state-changes.

The scale of individual neuronal components, however, seems to possess a comparatively lesser degree of operational isomorphism. Some ion channels are ligand-gated while others are voltage-gated; thus different aspects of physicality (i.e., molecular shape and voltage, respectively) form the procedural rules determining state-changes at this scale. Since there are two different determining factors at this scale, its degree of operational isomorphism is less than that of the protein and protein-complex scale and the molecular scale, both of which appear to have only one governing procedural-rule set. The scale of individual neurons, by contrast, appears to possess a greater degree of operational isomorphism: every neuron fires according to its threshold value, summing analog input values (graded synaptic potentials) into a binary output (i.e., the neuron either fires or does not). All individual neurons operate in a highly isomorphic manner; even though neurons of a given type are more operationally isomorphic in relation to each other than to neurons of another type, all neurons, regardless of type, still act in a highly isomorphic manner.

However, the scale of neuron-clusters and neural networks, which operate and communicate according to spatiotemporal sequences of firing patterns (action-potential patterns), appears to possess a lesser degree of operational isomorphism than individual neurons, because different sequences of firing patterns mean different things to two respective neural clusters or networks. Also note that at this scale the degree of functional isomorphism between components appears to be less than their degree of operational isomorphism—that is, the way each cluster or network operates is more similar across components than is their actual function (i.e., what they effectively do). Lastly, at the scale of high-level neural regions/systems, components (i.e., neural regions) differ significantly in morphology, in operationality, and in functionality; thus this appears to be the scale possessing the least operational isomorphism.
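The rough ordering sketched above can be restated compactly; the entries below merely summarize the qualitative judgments of the preceding paragraphs and are not measured quantities.

```python
# A compact restatement of the ordering argued for above (illustrative summary only).
OPERATIONAL_ISOMORPHISM_BY_SCALE = [
    ("molecules / atoms",            "high   (one rule-set: molecular and atomic forces)"),
    ("proteins / protein-complexes", "high   (one rule-set: protein biochemistry)"),
    ("individual neurons",           "high   (threshold-and-fire summation)"),
    ("sub-neuronal components",      "lower  (ligand-gated vs. voltage-gated mechanisms)"),
    ("neuron clusters / networks",   "lower  (firing-pattern 'meanings' diverge)"),
    ("high-level regions / systems", "lowest (morphology, operation, and function all diverge)"),
]

for scale, degree in OPERATIONAL_ISOMORPHISM_BY_SCALE:
    print(f"{scale:30} {degree}")
```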

I will now illustrate the concept of operational isomorphism using the physical-functionalist and the informational-functionalist NRU approaches, respectively, as examples. In terms of the physical-functionalist (i.e., prosthetic neuron) approach, both the passive (i.e., “direct”) and the CPU-controlled sub-classes are, each taken on its own, operationally isomorphic. An example of a physical-functionalist NRU that would not possess operational isomorphism is one that uses a passive-physicalist approach for one type of component (e.g., voltage-gated ion channels) and a CPU-controlled/cyber-physicalist approach [see Part 4 of this series] for another type of component (e.g., ligand-gated ion channels)—on that scale the components act according to different technological and methodological infrastructures, exhibit different operational modalities, and thus appear to possess a low degree of operational isomorphism. Note that the concern is not the degree of operational isomorphism between the functional-replication units and their biological counterparts, but rather the degree of operational isomorphism between the functional-replication units and the other units on the same scale.

Another possibly relevant type of operational isomorphism is the degree of isomorphism between the individual sub-components or procedural operations (i.e., “steps”) composing a given component, designated here as intra-operational isomorphism. While very similar to the degree of isomorphism for the scale immediately below, this is not equivalent to such a designation, in that the sub-components of a given larger component could be functionally isomorphic in relation to each other without being operationally isomorphic in relation to all other components on that scale. The passive sub-approach of the physical-functionalist approach would possess a greater degree of intra-operational isomorphism than would the CPU-controlled/cyber-physicalist sub-approach, because presumably each component would interact with the others (via physically embodied feedback) according to the same technological and methodological infrastructure—be it mechanical, electrical, chemical, or otherwise. The CPU-controlled sub-approach, by contrast, would possess a lesser degree of intra-operational isomorphism, because the sensors, the CPU, and the electric or electromechanical systems (the three main sub-components of each singular neuronal component—e.g., an artificial ion channel) operate according to different technological and methodological infrastructures and thus exhibit different operational modalities in relation to each other.

In regard to the informational-functionalist approach, an NRU model that would be operationally isomorphic is one wherein, regardless of the scale used, the type of approach used to model a given component on that scale is as isomorphic as possible with the approaches used to model the other components on that same scale. For example, if one uses a mathematical model to simulate spiking regions of the dendritic spine, then one shouldn’t use a non-mathematical (e.g., strictly computational-logic) approach to model non-spiking regions of the dendritic spine. Since the number of variations on the informational-functionalist approach is greater than for the physical-functionalist approach, there are more gradations to the degree of operational isomorphism. Using the exact same branch of mathematics to model two respective components would incur a greater degree of operational isomorphism than using alternate mathematical techniques from different disciplines to model them. Likewise, if we used different computational approaches to model the respective components, we would have a lesser degree of operational isomorphism. And if we emulated some components while merely simulating others, we would have a lesser degree of operational isomorphism than if both were either strictly simulatory or strictly emulatory.
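One could imagine scoring a set of per-component modeling choices for how operationally isomorphic they are at a given scale, roughly as sketched below; the categories, weights, and the scoring function itself are illustrative assumptions, not a proposed metric.

```python
from itertools import combinations

def pair_score(a, b):
    if a == b:
        return 1.0      # identical approach (e.g., the same branch of mathematics)
    if a[0] == b[0]:
        return 0.6      # same family, different technique (e.g., two mathematical disciplines)
    return 0.2          # different families (e.g., emulation vs. simulation)

def scale_isomorphism(choices):
    pairs = list(combinations(choices, 2))
    return sum(pair_score(a, b) for a, b in pairs) / len(pairs)

# Each component's model is tagged as (family, technique).
uniform = [("math", "diff-eq")] * 3
mixed = [("math", "diff-eq"), ("math", "stochastic"), ("logic", "rule-based")]
print(round(scale_isomorphism(uniform), 2))  # 1.0: fully isomorphic modeling choices
print(round(scale_isomorphism(mixed), 2))    # 0.33: mixed families lower the score
```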

If this premise proves true, it suggests that when picking the scale of our replication-approach (be it physical-functionalist or informational-functionalist), we should choose a scale that exhibits operational isomorphism—for example, the molecular scale rather than the scale of high-level neural regions, and that we should not use widely dissimilar types of modeling techniques to model one component (e.g., a molecular system) and another component on the same scale.

Note that, unlike operational-continuity, the degree of operational isomorphism was not an explicit concept or potential physical basis for subjective-continuity at the time of my work on immortality (i.e., this concept wasn’t yet fully fleshed out in 2010); rather, it was formulated in response to going back over my notes from this period so as to distill the broad developmental gestalt of my project. Though it appears to be implicit in (i.e., hinted at by) those notes, it wasn’t made explicit until relatively recently.

The next chapter describes the rest of my work on technological approaches to techno-immortality in 2010, focusing on a second new approach to subjective-continuity through a gradual-substrate-replacement procedure, and concluding with an overview of the ways my project differs from the other techno-immortalist projects.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.