Browsed by
Tag: nanotechnology

Public Opposition to Biotech Endangers Your Life and Health – Article by Edward Hudgins

The New Renaissance Hat
Edward Hudgins
******************************

Do you want to be smarter, healthier, and live longer? Remarkably, a new Pew survey found that most Americans answer “No!” if it requires using certain new technologies. This is a wakeup call for scientists, Silicon Valley entrepreneurs, transhumanists, and all of us who value our lives: we must fight for our lives on the battlefield of values.

Worries about human enhancement

We all understand how information technology has transformed our world with PCs, smartphones, the Internet, and Google. Nanotech, robotics, artificial intelligence, and, especially, genetic engineering are poised to unleash the next wave of wealth creation and improvements of the human condition.

But a new Pew survey entitled U.S. Public Wary of Biomedical Technologies to “Enhance” Human Abilities found that “Majorities of U.S. adults say they would be ‘very’ or ‘somewhat’ worried about gene editing (68%), brain chips (69%) and synthetic blood (63%),” technologies that in years to come could make us healthier, smarter, and stronger. While some say they “would be both enthusiastic and worried … overall, concern outpaces excitement.” Further, “More say they would not want enhancements of their brains and their blood (66% and 63%, respectively) than say they would want them (32% and 35%).”

Simply a reflection of individuals making decisions about their own lives, as is their right? Not quite. Their concerns about technology are already causing cultural and political pushback from left and right that could derail the advances sought by those of us who want better lives.

The Pew data reveals two ideological sources of opposition to new technologies.

Religion and meddling with nature

The survey found that 64% of Americans with a high religious commitment say “gene editing giving babies a much reduced disease risk” is “meddling with nature and crosses a line we should not cross.” Are you stunned that anyone could prefer to expose their own babies to debilitating or killer diseases when a prevention is possible?

And 65% with such a commitment have a similar opinion of “brain chip implants for much improved cognitive abilities.” Better to remain ignorant when a way to more knowledge is possible?

Obsession with inequality of abilities

When asked if “gene editing giving babies a much reduced disease risk” is an appropriate use of technology, 54% answered “Yes” if it results in people “always equally healthy as the average person.” But only 42% approved if it results in people “far healthier than any human known to date.” Similarly, 47% approved of synthetic blood if it results in physical improvements in individuals “equal to their own peak ability,” while only 28% approved if it results in improvements “far above that of any human known to date.”

Here we see the ugly side of egalitarianism. Better for everyone to be less healthy than for some to be healthier than others.

This inequality concern is another aspect of warped values we find in economic discussions. What if everyone enjoys rising levels of prosperity in a free-market system, but some individuals—Steve Jobs? Mark Zuckerberg?—become much wealthier than others through their own productive efforts? It’s win-win! But many would punish and demonize such achievers because they are the “top 1 percent,” even if such treatment means that those achievers produce less and, thus, everyone is less prosperous. Better we’re all poorer but more equal.

A disappearing digital divide

We saw this inequality concern in the 1990s when desktop PCs and the Internet were taking off. Some projected a “digital divide.” There would be more intelligent and advantaged individuals because they could access a universe of information through these technologies. And there would be those with little access who would fall further behind. Of course, what fell was the price of those technologies, which even then were accessible for free at most local libraries and now are in laptops, tablets, and smartphones, and affordable to most low-income individuals. The divide disappeared.

There were early adopters prosperous enough to try new information technologies. Similarly, there will be early adopters of biomedical tech, which later will become accessible to all—but only if enough people value it rather than fear it and demand that the government stop it.

The fight for values

In a companion piece to the Pew survey, entitled Human Enhancement: The Scientific and Ethical Dimensions of Striving for Perfection, Pew senior writer David Masci offers a good overview of serious moral issues raised by biotech and other exponential technologies. And those of us who welcome these technologies must fight for the moral values on which they are based.

We truly value our lives, and the happiness and flourishing that we as individuals can get out of them through our own achievements. We must shake others out of their spiritual lethargy so that they too will not let their precious lives waste away.

We must promote the values of reason and science as the means to better technology and as guides for our individual lives. Misguided dogmas, whether religious or political, lead to social and personal stagnation.

We must develop and implement strategies to promote human achievement, including enhancement of our capacities, as a value in our culture through our institutions—schools, media—and our aesthetics—movies, art, music.

We must offer an exciting and compelling vision of a fantastic, nonfiction future, of a world as it can be and should be, especially to young people who thirst for a future that will be worth living.

The values on which this future is based will not sell themselves. We must not only create the technology that will allow us to live healthier, smarter and stronger. We must also create the culture that will encourage and celebrate the creation and use of such technology.

Edward Hudgins is the director of advocacy for The Atlas Society and the editor and author of several books on politics and government policy.

Copyright The Atlas Society. For more information, please visit www.atlassociety.org.

The Two Faces of Aging: Cancer and Cellular Senescence – Article by Adam Alonzi

Adam Alonzi
******************************

This article is republished with the author’s permission. It was originally posted on Radical Science News.

Multiphoton fluorescence image of HeLa cells.

Aging, inflammation, cancer, and cellular senescence are all intimately interconnected. Deciphering the nature of each thread is a tremendous task, but one that must be done if preventative and geriatric medicine ever hope to advance. A one-dimensional analysis simply will not suffice. Without a strong understanding of the genetic, epigenetic, intercellular, and intracellular factors at work, only an incomplete picture can be formed. However, even with an incomplete picture, useful therapeutics can be and are being developed. One face is cancer, in reality a number of diseases characterized by uncontrolled cell division. The other is degradation, which causes a slew of degenerative disorders stemming from deterioration in regenerative capacity.

Now there is a new focus on making geroprotectors, a diverse and growing family of compounds that assist in preventing and reversing the unwanted side effects of aging. Senolytics, a subset of this broad group, accomplish this feat by encouraging the removal of decrepit cells. A few examples include dasatinib, quercetin, and ABT263. Although more research must be done, there are a precious handful of studies accessible to anyone with the inclination to scroll to the works cited section of this article. Those within the life-extension community and a few enlightened souls outside of it already know this, but it bears repeating: in the developed world all major diseases are the direct result of the aging process. If you accept this rather simple premise, and you really ought to, it should stoke your enthusiasm for the first generation of anti-aging elixirs and treatments. Before diving into the details of these promising new pharmaceuticals, nanotechnologies, and gene therapies, we must ask: what is cellular senescence? What causes it? What purpose does it serve?

Depending on the context in which it is operating, a single gene can have positive or negative effects on an organism’s phenotype. Often the gene is exerting both desirable and undesirable influences at the same time. This is called antagonistic pleiotropy. For example, high levels of testosterone can confer several reproductive advantages in youth, but in elderly men can increase the likelihood of developing prostate cancer. Cellular senescence is a protective measure; it is a response to damage that could potentially turn a healthy cell into a malignant one. Understandably, this becomes considerably more complex when one is examining multiple genes and multiple pathways. Identifying all of the players involved is difficult enough. Conboy’s famous parabiosis experiment, in which a young mouse’s circulatory system revived an old one’s, shows that alterations in the microenvironment, in this case identified and unidentified factors in the blood of young mice, can be very beneficial to their elders. Conversely, there is a solid body of evidence that shows senescent cells can have a bad influence on their neighbors. How can something similar be achieved in humans without having to surgically attach a senior citizen to a college freshman?

By halting its own division, a senescent cell removes itself as an immediate tumorigenic threat. Yet the accumulation of nondividing cells is implicated in a host of pathologies, including, somewhat paradoxically, cancer, which, as any life actuary’s mortality table will show, is yet another bedfellow of the second half of life. The single greatest risk factor for developing cancer is age. The Hayflick Limit is well known to most people who have ever excitedly watched the drama of a freshly inoculated petri dish. After exhausting their telomeres, cells stop dividing. Hayflick et al. astutely noted that “the [cessation of cell growth] in culture may be related to senescence in vivo.” Although cellular senescence is considered irreversible, a select few cells can resume normal growth after the inactivation of the p53 tumor suppressor. The removal of p16, a related gene, resulted in the elimination of the progeroid phenotype in mice. There are several important p’s at play here, but two are enough for now.
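The Hayflick Limit described above can be illustrated with a toy model of progressive telomere attrition. All numbers here are illustrative placeholders, not measured values; real telomere dynamics are far messier.

```python
# Toy model of the Hayflick Limit: each division shortens the telomere by a
# fixed amount until it crosses a critical threshold and the cell becomes
# senescent. Parameter values are purely illustrative.
def divisions_until_senescence(telomere_bp: int = 10_000,
                               loss_per_division_bp: int = 100,
                               critical_bp: int = 5_000) -> int:
    """Count divisions before the telomere would drop below the threshold."""
    divisions = 0
    while telomere_bp - loss_per_division_bp >= critical_bp:
        telomere_bp -= loss_per_division_bp
        divisions += 1
    return divisions

print(divisions_until_senescence())  # 50 with these illustrative numbers
```

With these made-up parameters the model happens to land near the roughly 40-60 division range classically associated with the Hayflick Limit, which is why they were chosen for the sketch.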

Our bodies are bombarded by insults to their resilient but woefully vincible microscopic machinery. Oxidative stress, DNA damage, telomeric dysfunction, carcinogens, assorted mutations from assorted causes, necessary or unnecessary immunological responses to internal or external factors, all take their toll. In response, cells may repair themselves, activate an apoptotic pathway to kill themselves, or simply stop proliferating. After suffering these slings and arrows, p53 is activated. Not surprisingly, mice carrying a hyperactive form of p53 display high levels of cellular senescence. To quote Campisi, abnormalities in p53 and p15 are found in “most, if not all, cancers.” Knocking p53 out altogether produced mice unusually free of tumors, but those mice find themselves prematurely past their prime. There is a clear trade-off here.

In a later experiment Garcia-Cao modified p53 to only express itself when activated. The mice exhibited normal longevity as well as an “unusual resistance to cancer.” Though it may seem so, these two cellular states are most certainly not opposing fates. As it is with oxidative stress and nutrient sensing, two other components of senescence or lack thereof, the goal is not to increase or decrease one side disproportionately, but to find the correct balance between many competing entities to maintain healthy homeostasis. As mentioned earlier, telomeres play an important role in geroconversion, the transformation of quiescent cells into senescent ones. Meta-analyses have shown a strong relationship between short telomeres and mortality risk, especially in younger people. Although cancer cells activate telomerase to overcome the Hayflick Limit, it is not entirely certain if the activation of telomerase is oncogenic.

SASP (senescence-associated secretory phenotype) is associated with chronic inflammation, which itself is implicated in a growing list of common infirmities. Many SASP factors are known to stimulate phenotypes similar to those displayed by aggressive cancer cells. The simultaneous injection of senescent fibroblasts with premalignant epithelial cells into mice results in malignancy. On the other hand, senescent human melanocytes secrete a protein that induces replicative arrest in a fair percentage of melanoma cells. In all experiments tissue types must be taken into account, of course. Some of the hallmarks of inflammation are elevated levels of IL-6, IL-8, and TNF-α. Inflammatory oxidative damage is carcinogenic and an inflammatory microenvironment is a good breeding ground for malignancies.

Caloric restriction extends lifespan in part by inhibiting TOR/mTOR (target of rapamycin/mechanistic target of rapamycin, also called the mammalian target of rapamycin). TOR is a sort of metabolic manager: it receives inputs regarding the availability of nutrients and stress levels, and then acts accordingly. Metformin is also a TOR inhibitor, which is why it is being investigated as a cancer shield and a longevity aid. Rapamycin has extended average lifespans in all species tested thus far and reduces geroconversion. It also restores the self-renewal and differentiation capacities of haemopoietic stem cells. For these reasons the Major Mouse Testing Program is using rapamycin as its positive control. mTOR and p53 dance (or battle) with each other beautifully in what Hasty calls the “Clash of the Gods.” While p53 inhibits mTOR1 activity, mTOR1 increases p53 activity. Since neither metformin nor rapamycin is without its share of unwanted side effects, more senolytics must be explored in greater detail.

Starting with a simple premise, namely that senescent cells rely on anti-apoptotic and pro-survival defenses more than their actively replicating counterparts, Campisi and her colleagues created a series of experiments to find the “Achilles’ Heel” of senescent cells. After comparing the two different cell states, they designed senolytic siRNAs. Thirty-nine transcripts were selected for knockdown by siRNA transfection, and 17 affected the viability of their targets more than that of healthy cells. Dasatinib, a cancer drug, and quercetin, a common flavonoid found in common foods, have senolytic properties. The former has a proven proclivity for fat-cell progenitors, and the latter is more effective against endothelial cells. Delivered together, they remove senescent mouse embryonic fibroblasts. Administration into elderly mice resulted in favorable changes in SA-BetaGAL (a molecule closely associated with SASP) and reduced p16 RNA. Single doses of D+Q together resulted in significant improvements in progeroid mice.

If you are not titillated yet, please embark on your own journey through the gallery of encroaching options for those who would prefer not to become chronically ill, suffer immensely, and, of course, die miserably in a hospital bed soaked with several types of their own excretions―presumably, hopefully, those who claim to be unafraid of death have never seen this image or naively assume they will never be the star of such a dismal and lamentably “normal” final act. There is nothing vain about wanting to avoid all the complications that come with time. This research is quickly becoming an economic and humanitarian necessity. The trailblazers who move this research forward will not only find wealth at the end of their path, but the undying gratitude of all life on earth.

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels “A Plank in Reason” and “Praying for Death: Mocking the Apocalypse”. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.

References

Blagosklonny, M. V. (2013). Rapamycin extends life- and health span because it slows aging. Aging (Albany NY), 5(8), 592.

Campisi, Judith, and Fabrizio d’Adda di Fagagna. “Cellular senescence: when bad things happen to good cells.” Nature reviews Molecular cell biology 8.9 (2007): 729-740.

Campisi, Judith. “Aging, cellular senescence, and cancer.” Annual review of physiology 75 (2013): 685.

Hasty, Paul, et al. “mTORC1 and p53: clash of the gods?.” Cell Cycle 12.1 (2013): 20-25.

Kirkland, James L. “Translating advances from the basic biology of aging into clinical application.” Experimental gerontology 48.1 (2013): 1-5.

Lamming, Dudley W., et al. “Rapamycin-induced insulin resistance is mediated by mTORC2 loss and uncoupled from longevity.” Science 335.6076 (2012): 1638-1643.

LaPak, Kyle M., and Christin E. Burd. “The molecular balancing act of p16INK4a in cancer and aging.” Molecular Cancer Research 12.2 (2014): 167-183.

Malavolta, Marco, et al. “Pleiotropic effects of tocotrienols and quercetin on cellular senescence: introducing the perspective of senolytic effects of phytochemicals.” Current drug targets (2015).

Rodier, Francis, Judith Campisi, and Dipa Bhaumik. “Two faces of p53: aging and tumor suppression.” Nucleic acids research 35.22 (2007): 7475-7484.

Rodier, Francis, and Judith Campisi. “Four faces of cellular senescence.” The Journal of cell biology 192.4 (2011): 547-556.

Salama, Rafik, et al. “Cellular senescence and its effector programs.” Genes & development 28.2 (2014): 99-114.

Tchkonia, Tamara, et al. “Cellular senescence and the senescent secretory phenotype: therapeutic opportunities.” The Journal of clinical investigation 123.3 (2013): 966-972.

Zhu, Yi, et al. “The Achilles’ heel of senescent cells: from transcriptome to senolytic drugs.” Aging cell (2015).

 

Happy Future Day! – Article by Edward Hudgins

Edward Hudgins
******************************

Stand up for optimism about the future today!

Transhumanism Australia, a non-profit that promotes education in science and technology, has marked March 1 as “Future Day.” It wants this day celebrated worldwide as a time “to consider the future of humanity.” If all of us made a habit of celebrating our potential, it could transform a global culture mired in pessimism and malaise. It would help build an optimistic world that is confident about what humans can accomplish if we put our minds and imaginations to it.

The Future is Bright

The information and communications technology that helps define and shape our world was, 40 years ago, a vision of the future brought into present reality by visionaries like Steve Jobs and Bill Gates. The exponential growth of the power of semiconductors allowed entrepreneurs to create one new industry after another, along with a stream of cutting-edge products and services.

Today, we are at exponential takeoff points in biotech, nanotech, and artificial intelligence. For example, sequencing a human genome cost $100 million in 2001 and $10 million in 2007, but it costs only a few thousand dollars today. Steve Jobs created the first Apple computers in his garage. Similarly housed biohackers could transform our lives in the future in ways that still seem to most folks like science fiction; indeed, the prospect of “curing death” is no longer a delusion of madmen but the subject of well-funded research projects in present-day laboratories.
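The cost figures above imply a startlingly fast exponential decline. A back-of-the-envelope sketch, taking "today" as roughly 2015 and "a few thousand dollars" as roughly $3,000 (both assumptions, not figures from the article):

```python
import math

# Implied halving time of sequencing cost, assuming a smooth exponential
# decline between the cited data points. Endpoint values are assumptions.
def halving_time_years(cost_start: float, cost_end: float, years: float) -> float:
    """Years per twofold cost reduction over the given interval."""
    return years / math.log2(cost_start / cost_end)

# 2001 -> 2007: $100M -> $10M
print(f"{halving_time_years(100e6, 10e6, 6):.1f} years per halving")
# 2001 -> ~2015: $100M -> ~$3k (assumed endpoint)
print(f"{halving_time_years(100e6, 3e3, 14):.1f} years per halving")
```

Under these assumptions the cost halves roughly every one to two years, i.e. faster than the classic 18-month Moore's-law cadence, which is the point the paragraph is making.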

For a prosperous present and promising future a society needs physical infrastructure—roads, power, communications. It needs a legal infrastructure—laws and political structures that protect the liberty of individuals so they can act freely and flourish in civil society. And it requires moral infrastructure, a culture that promotes the values of reason and individual productive achievement.

Future “Future Days”

We should congratulate our brothers “Down Under” for conceiving of Future Day. They have celebrated it in Sydney with a conference on the science that will produce a bright tomorrow. We in America and folks around the world should build on this idea. Today it’s a neat idea: next year, we could start a powerful tradition, a global Future or Human Achievement Day, promoting the bright future that could be.

Were such a day marked in every school and every media outlet, it could raise achiever consciousness. It could celebrate achievement in the culture—who invented everything that makes up our world today, and how? It could promote achievement as a central value in the life of each individual, whether the individual is nurturing a child to maturity or a business to profitability, writing a song, poem, business plan or dissertation, laying the bricks to a building or designing it, or arranging for its financing.

Such a day would help create the moral infrastructure necessary for a prosperous, fantastic, non-fiction future, a world as it can be and should be, a world created by humans for humans—or even transhumans!

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright The Atlas Society. For more information, please visit www.atlassociety.org.

G. Stolyarov II Interviews Adam Alonzi Regarding Emerging Biotech Research

G. Stolyarov II and Adam Alonzi
August 28, 2015
******************************

On August 23, 2015, Mr. Stolyarov invited Adam Alonzi to share the insights he was able to glean from many recent discussions with biotechnology entrepreneurs and researchers. Mr. Alonzi was able to subsequently edit some of his remarks to remedy the technical issues with audio quality during the broadcast. His edited version appears below.

See the original interview recording:

Adam Alonzi is the author of Praying for Death and A Plank in Reason. He is also a futurist, inventor, DIY enthusiast, biotechnologist, programmer, molecular gastronomist, consummate dilettante, and columnist at The Indian Economist. Read his blog here. Listen to his podcasts at http://adamalonzi.libsyn.com.

Adam Alonzi’s Interviews
Elizabeth Parrish – BioViva
Alex Zhavoronkov – InSilico Medicine
Maria Konovalenko – Science for Life Extension Foundation and Longevity Cookbook
Luis Arana – Robots Without Borders

Our Cells Will Be Guided and Protected by Machines – Article by Reason

Reason
September 21, 2014
******************************

A gulf presently lies between the nanoscale engineering of materials science on the one hand and the manipulation and understanding of evolved biological machinery on the other. In time that gulf will close: future industries will be capable of producing and controlling entirely artificial machines that integrate with, enhance, or replace our natural biological machines. Meanwhile biologists will be manufacturing ever more artificial and enhanced versions of cellular components, finding ways to make them better: evolution has rarely produced the best design possible for any given circumstance. Both sides will work towards one another and eventually meet in the middle.

Insofar as aging goes, a process of accumulating damage and malfunction in our biology, it is likely that this will first be successfully addressed and brought under medical control by producing various clearly envisaged ways to repair and maintain our cells just as they are: remove the damage, restore youthful function, and repeat as necessary. We stand much closer to that goal than the far more ambitious undertaking of building a better, more resilient, more easily repaired cell – a biology 2.0 if you like. That will happen, however. Our near descendants will be as much artificial as natural, and more capable and healthier for it.

The introduction of machinery to form a new human biology won’t happen all at once, however, and it isn’t entirely a far future prospect. There will be early gains and prototypes, the insertion of simpler types of machine into our cells for specific narrow purposes: sequestering specific proteins or wastes, or as drug factories to produce a compound in response to circumstances, or any one of a number of other similar tasks. If you want to consider nanoparticles or engineered assemblies of proteins capable of simple decision tree operations as machines then this has already happened in the lab:

Researchers Make Important Step Towards Creating Medical Nanorobots

Quote:

Researchers have made an important step towards creating medical nanorobots. They discovered a way of enabling nano- and microparticles to produce logical calculations using a variety of biochemical reactions. Many scientists believe logical operations inside cells or in artificial biomolecular systems to be a way of controlling biological processes and creating full-fledged micro- and nano-robots, which can, for example, deliver drugs on schedule to those tissues where they are needed.

Further, there is a whole branch of cell research that involves finding ways to safely introduce ever larger objects into living cells, such as micrometer-scale constructs. In an age in which the state of the art for engineering computational devices is the creation of 14 nanometer features, there is a lot that might be accomplished in the years ahead with the space contained within a 1000 nanometer diameter sphere.

Introducing Micrometer-Sized Artificial Objects into Live Cells: A Method for Cell-Giant Unilamellar Vesicle Electrofusion

Quote:

Direct introduction of functional objects into living cells is a major topic in biology, medicine, and engineering studies, since such techniques facilitate manipulation of cells and allow one to change their functional properties arbitrarily. In order to introduce various objects into cells, several methods have been developed, for example, endocytosis and macropinocytosis. Nonetheless, the sizes of introducible objects are largely limited: up to several hundred nanometers and a few micrometers in diameter. In addition, the uptake of objects is dependent on cell type, and neither endocytosis nor macropinocytosis occur, for example, in lymphocytes. Even after successful endocytosis, incorporated objects are transported to the endosomes; they are then eventually transferred to the lysosome, in which acidic hydrolases degrade the materials. Hence, these two systems are not particularly suitable for introduction of functionally active molecules and objects. To overcome these obstacles, novel delivery systems have been contrived, such as cationic liposomes and nanomicelles, that are used for gene transfer; yet, only nucleic acids that are limited to a few hundred nanometers in size can be introduced. By employing peptide vectors, comparatively larger materials can be introduced into cells, although the size limit of peptides and beads is approximately 50 nm, which is again insufficient for delivery of objects, such as DNA origami and larger functional beads.

Here, we report a method for introducing large objects of up to a micrometer in diameter into cultured mammalian cells by electrofusion of giant unilamellar vesicles (GUVs). We prepared GUVs containing various artificial objects using a water-in-oil emulsion centrifugation method. GUVs and dispersed HeLa cells were exposed to an alternating current (AC) field to induce a linear cell-GUV alignment, and then a direct current (DC) pulse was applied to facilitate transient electrofusion.

With uniformly sized fluorescent beads as size indexes, we successfully and efficiently introduced beads of 1 µm in diameter into living cells along with a plasmid mammalian expression vector. Our electrofusion did not affect cell viability. After the electrofusion, cells proliferated normally until confluence was reached, and the introduced fluorescent beads were inherited during cell division. Analysis by both confocal microscopy and flow cytometry supported these findings. As an alternative approach, we also introduced a designed nanostructure (DNA origami) into live cells. The results we report here represent a milestone for designing artificial symbiosis of functionally active objects (such as micro-machines) in living cells. Moreover, our technique can be used for drug delivery, tissue engineering, and cell manipulation.

Cell machinery will be a burgeoning medical industry of the 2030s, I imagine. To my eyes the greatest challenge in all of this is less the mass production of useful machines per se, and more the coordination and control of a body full of tens of trillions of such machines, perhaps from varied manufacturers, introduced for different goals, and over timescales long in comparison to business cycles and technological progress. That isn’t insurmountable, but it sounds like a much harder problem than those inherent in designing these machines and demonstrating them to be useful in cell cultures. It is a challenge on a scale of complexity that exceeds that of managing our present global communications network by many orders of magnitude. If you’ve been wondering what exactly it is we’ll be doing with the vast computational power available to us in the decades ahead, given that this metric continues to double every 18 months or so, here is one candidate.
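The scale of the coordination problem sketched above follows directly from the doubling assumption in the final sentence. A quick sketch of what "doubling every 18 months or so" compounds to over the planning horizons the paragraph mentions (the horizons chosen here are illustrative):

```python
# Capacity multiplier after a number of years, assuming steady doubling
# every 18 months, as the article posits. Horizons are illustrative.
def growth_factor(years: float, doubling_months: float = 18.0) -> float:
    """Multiplier on computational capacity after `years` of doubling."""
    return 2 ** (years * 12 / doubling_months)

for horizon in (5, 10, 20):
    print(f"after {horizon:2d} years: x{growth_factor(horizon):,.0f}")
```

At that rate, capacity grows by roughly a factor of ten per five years, which is why a 2030s-scale control problem that dwarfs today's communications network is not an unreasonable candidate use for it.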

Reason is the founder of The Longevity Meme (now Fight Aging!). He saw the need for The Longevity Meme in late 2000, after spending a number of years searching for the most useful contribution he could make to the future of healthy life extension. When not advancing the Longevity Meme or Fight Aging!, Reason works as a technologist in a variety of industries. 
***

This work is reproduced here in accord with a Creative Commons Attribution license. It was originally published on FightAging.org.

Technological Singularities: An Overview – Video by G. Stolyarov II

Mr. Stolyarov explains the basic concept of a technological Singularity and his understanding that humankind has already experienced three such Singularities in the form of the Agricultural, Industrial, and Information Revolutions. The next Singularity will come about due to a convergence of technologies such as artificial intelligence, nanotechnology, and biotechnology (including indefinite life extension).

Transhumanism and Mind Uploading Are Not the Same – Video by G. Stolyarov II

In what is perhaps the most absurd attack on transhumanism to date, Mike Adams of NaturalNews.com equates this broad philosophy and movement with “the entire idea that you can ‘upload your mind to a computer'” and further posits that the only kind of possible mind uploading is the destructive kind, where the original, biological organism ceases to exist. Mr. Stolyarov refutes Adams’s equation of transhumanism with destructive mind uploading and explains that advocacy of mind uploading is neither a necessary nor a sufficient component of transhumanism.

References
– “Transhumanism and Mind Uploading Are Not the Same” – Essay by G. Stolyarov II
– “Transhumanism debunked: Why drinking the Kurzweil Kool-Aid will only make you dead, not immortal” – Mike Adams – NaturalNews.com – June 25, 2013
– SENS Research Foundation
– “Nanomedicine” – Wikipedia
– “Transhumanism: Towards a Futurist Philosophy” – Essay by Max More
– 2045 Initiative Website
– Bebionic Website
– “How Can I Live Forever?: What Does and Does Not Preserve the Self” – Essay by G. Stolyarov II
– “Immortality: Bio or Techno?” – Essay by Franco Cortese

Transhumanism and Mind Uploading Are Not the Same – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 10, 2013
******************************

In what is perhaps the most absurd attack on transhumanism to date, Mike Adams of NaturalNews.com equates this broad philosophy and movement with “the entire idea that you can ‘upload your mind to a computer’” and further posits that the only kind of possible mind uploading is the destructive kind, where the original, biological organism ceases to exist. Adams goes so far as calling transhumanism a “death cult much like the infamous Heaven’s Gate cult led by Marshall Applewhite.”

I will not devote this essay to refuting any of Adams’s arguments against destructive mind uploading, because no serious transhumanist thinker of whom I am aware endorses the kind of procedure Adams uses as a straw man. For anyone who wishes to continue existing as an individual, uploading the contents of the mind to a computer and then killing the body is perhaps the most bizarrely counterproductive possible activity, short of old-fashioned suicide. Instead, Adams’s article – all the misrepresentations aside – offers the opportunity to make important distinctions of value to transhumanists.

First, having a positive view of mind uploading is neither necessary nor sufficient for being a transhumanist. Mind uploading has been posited as one of several routes toward indefinite human life extension. Other routes include the periodic repair of the existing biological organism (as outlined in Aubrey de Grey’s SENS project or as entailed in the concept of nanomedicine) and the augmentation of the biological organism with non-biological components (Ray Kurzweil’s actual view, as opposed to the absurd positions Adams attributes to him). Transhumanism, as a philosophy and a movement, embraces the lifting of the present limitations upon the human condition – limitations that arise out of the failures of human biology and unaltered physical nature. Max More, in “Transhumanism: Towards a Futurist Philosophy”, writes that “Transhumanism differs from humanism in recognizing and anticipating the radical alterations in the nature and possibilities of our lives resulting from various sciences and technologies such as neuroscience and neuropharmacology, life extension, nanotechnology, artificial ultraintelligence, and space habitation, combined with a rational philosophy and value system.” That Adams would take this immensity of interrelated concepts, techniques, and aspirations and equate it to destructive mind uploading is, plainly put, mind-boggling. There is ample room in transhumanism for a variety of approaches toward lifting the limitations of the human condition. Some of these approaches will be more successful than others, and no one approach is obligatory for those wishing to consider themselves transhumanists.

Moreover, Adams greatly misconstrues the positions of those transhumanists who do support mind uploading. For most such transhumanists, a digital existence is not seen as superior to their current biological existences, but rather as a necessary recourse if or when it becomes impossible to continue maintaining a biological existence. Dmitry Itskov’s 2045 Initiative is perhaps the most prominent example of the pursuit of mind uploading today. The aim of the initiative is to achieve cybernetic immortality in a stepwise fashion, through the creation of a sequence of avatars that gives the biological human an increasing amount of control over non-biological components. Avatar B, planned for circa 2020-2025, would involve a human brain controlling an artificial body. If successful, this avatar would prolong the existence of the biological brain when other components of the biological body have become too irreversibly damaged to support it. Avatar C, planned for circa 2030-2035, would involve the transfer of a human mind from a biological to a cybernetic brain, after the biological brain is no longer able to support life processes. There is no destruction intended in the 2045 Avatar Project Milestones, only preservation of some manner of intelligent functioning of a person whom the status quo would instead relegate to becoming food for worms. The choice between decomposition and any kind of avatar is a no-brainer (well, a brainer actually, for those who choose the latter).

Is Itskov’s path toward immortality the best one? I personally prefer SENS, combined with nanomedicine and piecewise artificial augmentations of the sort that are already beginning to occur (witness the amazing bebionic3 prosthetic hand). Itskov’s approach appears to assume that the technology for transferring the human mind to an entirely non-biological body will become available sooner than the technology for incrementally maintaining and fortifying the biological body to enable its indefinite continuation. My estimation is the reverse. Before scientists will be able to reverse-engineer not just the outward functions of a human brain but also its immensely complex and intricate internal structure, we will have within our grasp the ability to conquer an ever greater number of perils that befall the biological body and to repair the body using both biological and non-biological components.

The biggest hurdle for mind uploading to overcome is one that does not arise with the approach of maintaining the existing body and incrementally replacing defective components. This hurdle is the preservation of the individual’s unique and irreplaceable vantage point upon the world – his or her direct sense of being that person and no other. I term this direct vantage point an individual’s “I-ness”.  Franco Cortese, in his immensely rigorous and detailed conceptual writings on the subject, calls it “subjective-continuity” and devotes his attention to techniques that could achieve gradual replacement of biological neurons with artificial neurons in such a way that there is never a temporal or operational disconnect between the biological mind and its later cybernetic instantiation. Could the project of mind uploading pursue directions that would achieve the preservation of the “I-ness” of the biological person? I think this may be possible, but only if the resulting cybernetic mind is structurally analogous to the biological mind and, furthermore, maintains the temporal continuity of processes exhibited by an analog system, as opposed to a digital system’s discrete “on-off” states and the inability to perform multiple exactly simultaneous operations. Furthermore, only by developing the gradual-replacement approaches explored by Cortese could this prospect of continuing the same subjective experience (as opposed to simply creating a copy of the individual) be realized. But Adams, in his screed against mind uploading, seems to ignore all of these distinctions and explorations. Indeed, he appears to be oblivious of the fact that, yes, transhumanists have thought quite a bit about the philosophical questions involved in mind uploading. 
He seems to think that in mind uploading, you simply “copy the brain and paste it somewhere else” and hope that “somehow magically that other thing becomes ‘you.’” Again, no serious proponent of mind uploading – and, more generally, no serious thinker who has considered the subject – would hold this misconception.

Adams is wrong on a still further level, though. Not only is he wrong to equate transhumanism with mind uploading; not only is he wrong to declare all mind uploading to be destructive – he is also wrong to condemn the type of procedure that would simply make a non-destructive copy of an individual. This type of “backup” creation has indeed been advocated by transhumanists such as Ray Kurzweil. While a pure copy of one’s mind or its contents would not transfer one’s “I-ness” to a digital substrate and would not enable one to continue experiencing existence after a fatal illness or accident, it could definitely help an individual regain his memories in the event of brain damage or amnesia. Furthermore, if the biological individual were to irreversibly perish, such a copy would at least preserve vital information about the biological individual for the benefit of others. Moreover, it could enable the biological individual’s influence upon the world to be more powerfully actualized by a copy that considers itself to have the biological individual’s memories, background, knowledge, and personality. If we had with us today copies of the minds of Archimedes, Benjamin Franklin, and Nikola Tesla, we would certainly all benefit greatly from continued outpourings of technological and philosophical innovation. The original geniuses would not know or care about this, since they would still be dead, but we, in our interactions with minds very much like theirs, would be immensely better off than we are with only their writings and past inventions at our disposal.

Yes, destructive digital copying of a mind would be a bafflingly absurd and morally troubling undertaking – but recognition of this is neither a criticism of transhumanism nor of any genuinely promising projects of mind uploading. Instead, it is simply a matter of common sense, a quality which Mike Adams would do well to acquire.

Neuronal “Scanning” and NRU Integration – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 23, 2013
******************************
This essay is the seventh chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first six chapters were previously published on The Rational Argumentator under the following titles:
***

I was planning on using the NEMS already conceptually developed by Robert Freitas for nanosurgery applications (to be supplemented by the use of MEMS if the requisite technological infrastructure was unavailable at the time) to take in vivo recordings of the salient neural metrics and properties needing to be replicated. One novel approach was to design the units with elongated, worm-like bodies, disposing the computational and electromechanical apparatus along the length of the unit. This sacrifices width for length so as to allow the units to fit inside the extracellular space between neurons and glial cells, as a postulated solution to a lack of sufficient miniaturization. Moreover, if a unit were too wide to be used in this way, extending its length by the same proportion would still allow it to operate in the extracellular space, provided that its means of data-measurement weren't themselves too large to fit there (the span of ECF between two adjacent neurons for much of the brain is around 200 Angstroms).

I was planning on using the chemical and electrical sensing methodologies already in development for nanosurgery as the technological and methodological infrastructure for the neuronal data-measurement methodology. However, I also explored my own conceptual approaches to data-measurement. These consisted of detecting variation in morphological features in particular, as the schemes for electrical and chemical sensing already extant seemed either sufficiently developed or to be receiving sufficient developmental support and/or funding. One was the use of laser scanning or, more generally, reflection-based ranging (e.g., sonar) to measure and record morphological data. Another was a device that uses a 2D array of depressible members (e.g., solid members attached to a spring or ratchet assembly, which is operatively connected to a means of detecting how much each individual member is depressed—such as but not limited to piezoelectric crystals that produce electricity in response and proportion to applied mechanical strain). The device would be run along the neuronal membrane, and the topology of the membrane would be subsequently recorded by the pattern of depression recordings, which are then integrated to provide a topographic map of the neuron (e.g., relative location of integral membrane components to determine morphology—and magnitude of depression to determine emergent topology). This approach could also potentially be used to identify the integral membrane proteins, rather than using electrical or chemical sensing techniques, if the topologies of the respective proteins are sufficiently different as to be detectable by the unit (determined by its degree of precision, which typically is a function of its degree of miniaturization).
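The depressible-member array described above is, in effect, a piezoelectric height map. Here is a minimal sketch of the readout step, assuming a purely illustrative linear sensitivity between depression depth and piezo voltage (the constant, function name, and data values are all hypothetical, not drawn from any real device):

```python
# Hypothetical sketch: each depressible member reports a piezoelectric
# voltage proportional to how far it is pressed in; dividing by an
# assumed sensitivity converts the voltage grid into a topographic map.

SENSITIVITY = 2.0  # volts per nanometer of depression (illustrative value)

def to_topography(voltage_grid):
    """Convert a 2-D grid of piezo voltages into depression depths (nm)."""
    return [[v / SENSITIVITY for v in row] for row in voltage_grid]

# A 2x2 patch of readings taken as the device rests on the membrane.
readings = [[0.0, 4.0], [2.0, 6.0]]
print(to_topography(readings))  # [[0.0, 2.0], [1.0, 3.0]]
```

Identifying integral membrane proteins by shape would then amount to matching sub-regions of this map against known protein topologies, a step omitted here.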

The constructional and data-measurement units would also rely on the technological and methodological infrastructure for organization and locomotion that would be used in normative nanosurgery. I conceptually explored such techniques as the use of a propeller, the use of pressure-based methods (i.e., a stream of water acting as jet exhaust would in a rocket), the use of artificial cilia, and the use of tracks that the unit attaches to so as to be moved electromechanically, which decreases computational intensiveness – a measure of required computation per unit time – rather than having a unit compute its relative location so as to perform obstacle-avoidance and not, say, damage in-place biological neurons. Obstacle-avoidance and related concerns are instead negated through the use of tracks that limit the unit’s degrees of freedom—thus preventing it from having to incorporate computational techniques of obstacle-avoidance (and their entailed sensing apparatus). This also decreases the necessary precision (and thus, presumably, the required degree of miniaturization) of the means of locomotion, which would need to be much greater if the unit were to perform real-time obstacle avoidance. Such tracks would be constructed in iterative fashion. The constructional system would analyze the space in front of it to determine if the space was occupied by a neuron terminal or soma, and extrude the tracks iteratively (e.g., add a segment in spaces where it detects the absence of biological material). It would then move along the newly extruded track, progressively extending it through the spaces between neurons as it moves forward.
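The iterative track-extrusion procedure above can be caricatured on a one-dimensional grid. This toy version halts when biological material is detected rather than routing around it, and the occupancy representation and function name are my own illustrative choices:

```python
# A toy rendering of iterative track extrusion: the unit checks the cell
# ahead on a 1-D grid, lays a track segment only where no biological
# material is detected, and advances along the track it just built.

def extrude_track(occupancy):
    """occupancy[i] is True where biological material blocks position i.
    Returns the positions where track segments were laid before the unit
    stopped (or reached the end of the surveyed region)."""
    track = []
    position = 0
    while position < len(occupancy):
        if occupancy[position]:      # neuron terminal or soma ahead: stop
            break
        track.append(position)       # extrude a segment into free space
        position += 1                # move along the newly laid segment
    return track

# Free space at positions 0-2, a neuron at 3: three segments, then halt.
print(extrude_track([False, False, False, True, False]))  # [0, 1, 2]
```

A real system would presumably branch or detour at the stopping point; the sketch only shows the detect-extrude-advance loop itself.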

Non-Distortional in vivo Brain “Scanning”

A novel avenue of inquiry that opened during this period involves counteracting, or taking into account, the distortions caused by the data-measurement units in the elements or properties they are measuring, and subsequently applying such corrections to the recorded data. A unit changes the local environment that it is supposed to be measuring and recording, which becomes problematic. My solution was to test which operations performed by the units have the potential to distort relevant attributes of the neuron or its environment, and to build units that compensate for those distortions either physically or computationally.

If we reduce how a recording unit’s operation distorts neuronal behavior into a list of mathematical rules, we can take the recordings and apply mathematical techniques to eliminate or “cancel out” those distortions post-measurement, thus arriving at what would have been the correct data. This approach would work only if the distortions are affecting the recorded data (i.e., changing it in predictable ways), and not if they are affecting the unit’s ability to actually access, measure, or resolve such data.
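As a toy illustration of this post-measurement cancellation, suppose the unit's distortion of a reading reduces to a single invertible rule, here an assumed linear gain-and-offset effect (the model and its constants are hypothetical placeholders, not a claim about any real instrument):

```python
# Hedged sketch: if a recording unit's distortion of the measured quantity
# can be modeled as a known, invertible rule, the true value can be
# recovered from the recording after the fact.

def distort(true_value, gain=1.05, offset=-0.2):
    """Assumed model of how the unit's presence skews its own reading."""
    return gain * true_value + offset

def cancel_distortion(recorded, gain=1.05, offset=-0.2):
    """Invert the distortion model to recover the undistorted value."""
    return (recorded - offset) / gain

true_potential = -70.0            # mV, a typical resting potential
recorded = distort(true_potential)
corrected = cancel_distortion(recorded)
print(abs(corrected - true_potential) < 1e-9)  # True
```

The caveat in the text shows up directly here: the inversion only works when the distortion changes the recorded value in a predictable way; if the distortion instead prevents the unit from resolving the quantity at all, there is nothing to invert.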

The second approach applies the method underlying the first approach to the physical environment of the neuron. A unit senses and records the constituents of the area of space immediately adjacent to its edges and mathematically models that “layer”; i.e., if it is meant to detect ionic solutions (in the case of ECF or ICF), then it would measure their concentration and subsequently model ionic diffusion for that layer. It then moves forward, encountering another adjacent “layer” and integrating it with its extant model. By iteratively sensing what is immediately adjacent to it, it can model the space it occupies as it travels through that space. It then uses electrical or chemical stores to manipulate the electrical and chemical properties of the environment immediately adjacent to its surface, so as to produce the emergent effects of that model (i.e., the properties of the edges of that model and how those properties causally affect adjacent sections of the environment), thus producing the emergent effects that would have been present if the NRU-construction/integration system or data-measuring system hadn’t occupied that space.

The third postulated solution was the use of a grid comprised of a series of hollow recesses placed in front of the sensing/measuring apparatus. The grid is impressed upon the surface of the membrane. Each compartment isolates a given section of the neuronal membrane from the rest. The constituents of each compartment are measured and recorded, most probably via uptake of its constituents and transport to a suitable measuring apparatus. A simple indexing system can keep track of which constituents came from which grid (and thus which region of the membrane they came from). The unit has a chemical store operatively connected to the means of locomotion used to transport the isolated membrane-constituents to the measuring/sensing apparatus. After a given compartment’s constituents are measured and recorded, the system then marks its constituents (determined by measurement and already stored as recordings by this point of the process), takes an equivalent molecule or compound from a chemical inventory, and replaces the substance it removed for measurement with the equivalent substance from its chemical inventory. Once this is accomplished for a given section of membrane, the grid then moves forward, farther into the membrane, leaving the replacement molecules/compounds from the biochemical inventory in the same respective spots as their original counterparts. It does this iteratively, making its way through a neuron and out the other side. This approach is the most speculative, and thus the least likely to be used. It would likely require the use of NEMS, rather than MEMS, as a necessary technological infrastructure, if the approach were to avoid becoming economically prohibitive, because in order for the compartment-constituents to be replaceable after measurement via chemical store, they need to be simple molecules and compounds rather than sections of emergent protein or tissue, which are comparatively harder to artificially synthesize and store in working order.
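The indexing bookkeeping for the compartment grid might be sketched as follows. The data shapes, function names, and chemical labels are purely illustrative choices of mine; the point is only that each grid cell's extracted constituent is recorded under its index and, when the chemical inventory holds an equivalent, a like-for-like replacement is logged for re-deposition:

```python
# Toy sketch of compartment-grid bookkeeping: record each cell's measured
# constituent under its (row, col) index, and note which cells can be
# refilled like-for-like from the unit's chemical inventory.

def process_grid(measurements, inventory):
    """measurements: {(row, col): constituent name};
    inventory: set of compounds the chemical store holds pre-made.
    Returns (records, replacements), both keyed by grid index."""
    records, replacements = {}, {}
    for cell, constituent in measurements.items():
        records[cell] = constituent          # measured and stored
        if constituent in inventory:         # replaceable from the store
            replacements[cell] = constituent
    return records, replacements

grid = {(0, 0): "Na+", (0, 1): "K+", (1, 0): "phospholipid"}
inventory = {"Na+", "K+", "Cl-"}
records, replacements = process_grid(grid, inventory)
print(replacements)  # {(0, 0): 'Na+', (0, 1): 'K+'}
```

The gap between `records` and `replacements` mirrors the economic caveat in the text: simple ions are easy to restock, while emergent proteins and tissue sections are not.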

***

In the next chapter I describe the work done throughout late 2009 on biological/non-biological NRU hybrids, and in early 2010 on one of two new approaches to retaining subjective-continuity through a gradual replacement procedure, both of which are unrelated to concerns of graduality or sufficient functional equivalence between the biological original and the artificial replication-unit.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 18, 2013
******************************
This essay is the third chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first two chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death” and “Immortality: Material or Ethereal? Nanotech Does Both!“.
***

The simplest approach to the functional replication of biological neurons I conceived of during this period involved what is normally called a “black-box” model of a neuron. This was already a concept in the wider brain-emulation community, though I had yet to encounter it. This is even simpler than the mathematically weighted Artificial Neurons discussed in the previous chapter. Rather than emulating or simulating the behavior of a neuron (i.e., using actual computational—or more generally signal—processing), we (1) determine the range of input values that a neuron responds to, (2) stimulate the neuron at each interval (the number of intervals depending on the precision of the stimulus) within that input range, and (3) record the corresponding range of outputs.

This reduces the neuron to essentially a look-up table (or, more formally, an associative array). The input ranges I originally considered (in 2007) consisted of a range of electrical potentials, but later (in 2008) were developed to include different cumulative organizations of specific voltage values (i.e., some inputs activated and others not) and finally the chemical inputs and outputs of neurons. The black-box approach was eventually extended to the sub-neuron scale—e.g., to sections of the cellular membrane. This creates a greater degree of functional precision, bringing the functional modality of the black-box NRU class into greater accordance with the functional modality of biological neurons. (I.e., it is closer to biological neurons because they do in fact process multiple inputs separately, rather than a single cumulative sum at once, as in the previous versions of the black-box approach.) We would also have a higher degree of variability for a given quantity of inputs.
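The three steps above can be sketched as code. This is a minimal illustration of the black-box-as-associative-array idea, assuming a one-dimensional input range sampled at even intervals; the toy threshold "neuron" and all names are my own placeholders, not a model of any real cell:

```python
# Sketch of the "black-box" neuron: probe an input range at discrete
# intervals, record the outputs in a look-up table, then replay the
# recorded output for the nearest sampled input.

def build_black_box(stimulate, v_min, v_max, n_intervals):
    """Probe a neuron (here, any callable) at n_intervals evenly spaced
    input values and store the observed outputs keyed by input."""
    step = (v_max - v_min) / (n_intervals - 1)
    table = {}
    for i in range(n_intervals):
        v = v_min + i * step
        table[round(v, 6)] = stimulate(v)
    return table, step, v_min

def black_box_response(table, step, v_min, v):
    """Respond with the output recorded for the nearest sampled input."""
    i = round((v - v_min) / step)
    key = round(v_min + i * step, 6)
    return table[key]

# Toy stand-in for a biological neuron: a simple threshold response.
neuron = lambda v: 1.0 if v > 0.5 else 0.0
table, step, v_min = build_black_box(neuron, 0.0, 1.0, 11)
print(black_box_response(table, step, v_min, 0.72))  # 1.0
```

The sub-neuron refinement described above would amount to keeping one such table per membrane section, with a tuple of separate inputs as the key rather than a single cumulative value.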

I soon chanced upon literature dealing with MEMS (micro-electro-mechanical systems) and NEMS (nano-electro-mechanical systems), which eventually led me to nanotechnology and its use in nanosurgery in particular. I saw nanotechnology as the preferred technological infrastructure regardless of the approach used; its physical nature (i.e., operational and functional modalities) could facilitate the electrical and chemical processes of the neuron if the physicalist-functionalist (i.e., physically embodied or ‘prosthetic’) approach proved either preferable or required, while the computation required for its normative functioning (regardless of its particular application) assured that it could facilitate the informationalist-functionalist (i.e., computational emulation or simulation) of neurons if that approach proved preferable. This was true of MEMS as well, with the sole exception of not being able to directly synthesize neurotransmitters via mechanosynthesis, instead being limited in this regard to the release of pre-synthesized biochemical inventories. Thus I felt that I was able to work on conceptual development of the methodological and technological infrastructure underlying both (or at least variations to the existing operational modalities of MEMS and NEMS so as to make them suitable for their intended use), without having to definitively choose one technological/methodological infrastructure over the other. Moreover, there could be processes that are reducible to computation, yet still fail to be included in a computational emulation due to our simply failing to discover the principles underlying them. 
The prosthetic approach had the potential of replicating this aspect by integrating such a process, as it exists in the biological environment, into its own physical operation, performing iterative maintenance or replacement of the biological process until such time as the principles underlying it could be discovered (a prerequisite for discovering how it contributes to the emergent computation occurring in the neuron) and thus included in the informationalist-functionalist approach.

Also, I had by this time come across the existing approaches to Mind-Uploading and Whole-Brain Emulation (WBE), including Randal Koene’s minduploading.org, and realized that the notion of immortality through gradually replacing biological neurons with functional equivalents wasn’t strictly my own. I hadn’t yet come across Kurzweil’s thinking in regard to gradual uploading described in The Singularity is Near (where he suggests a similarly nanotechnological approach), and so felt that there was a gap in the extant literature in regard to how the emulated neurons or neural networks were to communicate with existing biological neurons (which is an essential requirement of gradual uploading and thus of any approach meant to facilitate subjective-continuity through substrate replacement). Thus my perceived role changed from the father of this concept to filling in the gaps and inconsistencies in the already-extant approach and in further developing it past its present state. This is another aspect informing my choice to work on and further varietize both the computational and physical-prosthetic approach—because this, along with the artificial-biological neural communication problem, was what I perceived as remaining to be done after discovering WBE.

The anticipated use of MEMS and NEMS in emulating the physical processes of the neurons included first simply electrical potentials, but eventually developed to include the chemical aspects of the neuron as well, in tandem with my increasing understanding of neuroscience. I had by this time come across Drexler’s Engines of Creation, which was my first introduction to antecedent proposals for immortality—specifically his notion of iterative cellular upkeep and repair performed by nanobots. I applied his concept of mechanosynthesis to the NRUs to facilitate the artificial synthesis of neurotransmitters. I eventually realized that the use of pre-synthesized chemical stores of neurotransmitters was a simpler approach that could be implemented via MEMS, thus being more inclusive for not necessitating nanotechnology as a required technological infrastructure. I also soon realized that we could eliminate the need for neurotransmitters completely by recording how specific neurotransmitters affect the nature of membrane-depolarization at the post-synaptic membrane and subsequently encoding this into the post-synaptic NRU (i.e., length and degree of depolarization or hyperpolarization, and possibly the diameter of ion-channels or differential opening of ion-channels—that is, some and not others) and assigning a discrete voltage to each possible neurotransmitter (or emergent pattern of neurotransmitters; salient variables include type, quantity and relative location) such that transmitting that voltage makes the post-synaptic NRU’s controlling-circuit implement the membrane-polarization changes (via changing the number of open artificial-ion-channels, or how long they remain open or closed, or their diameter/porosity) corresponding to the changes in biological post-synaptic membrane depolarization normally caused by that neurotransmitter.

In terms of the enhancement/self-modification side of things, I also realized during this period that mental augmentation (particularly the intensive integration of artificial-neural-networks with the existing brain) increases the efficacy of gradual uploading by decreasing the total portion of your brain occupied by the biological region being replaced—thus effectively making that portion’s temporary operational disconnection from the rest of the brain more negligible to concerns of subjective-continuity.

While I was thinking of the societal implications of self-modification and self-modulation in general, I wasn’t really consciously trying to do active conceptual work (e.g., working on designs for pragmatic technologies and methodologies as I was with limitless-longevity) on this side of the project due to seeing the end of death as being a much more pressing moral imperative than increasing our degree of self-determination. The 100,000 unprecedented calamities that befall humanity every day cannot wait; for these dying fires it is now or neverness.

Virtual Verification Experiments

The various alternative approaches to gradual substrate-replacement were meant to be alternative designs contingent upon various premises for what was needed to replicate functionality while retaining subjective-continuity through gradual replacement. I saw the various embodiments as being narrowed down through empirical validation prior to any whole-brain replication experiments. However, I now see that multiple alternative approaches—based, for example, on computational emulation (informationalist-functionalist) and physical replication (physicalist-functionalist) (these are the two main approaches thus far discussed) would have concurrent appeal to different segments of the population. The physicalist-functionalist approach might appeal to wide numbers of people who, for one metaphysical prescription or another, don’t believe enough in the computational reducibility of mind to bet their lives on it.

These experiments originally consisted of applying sensors to a given biological neuron, constructing NRUs based on a series of variations on the two main approaches, running each, and looking for any functional divergence over time. This is essentially the same approach outlined in the WBE Roadmap (which I had yet to discover at that point), which suggests a validation approach involving experiments done on single neurons before moving on to the organismal emulation of increasingly complex species, up to and including the human. My thinking in regard to these experiments evolved over the next few years to include some novel approaches that I don’t think have yet been discussed in communities interested in brain emulation.

An equivalent physical or computational simulation of the biological neuron’s environment is required to verify functional equivalence, as otherwise we wouldn’t be able to distinguish between functional divergence due to an insufficient replication-approach/NRU-design and functional divergence due to difference in either input or operation between the model and the original (caused by insufficiently synchronizing the environmental parameters of the NRU and its corresponding original). Isolating these neurons from their organismal environment allows the necessary fidelity (and thus computational intensity) of the simulation to be minimized by reducing the number of environmental variables affecting the biological neuron during the span of the initial experiments. Moreover, even if this doesn’t give us a perfectly reliable model of the efficacy of functional replication given the number of environmental variables one expects a neuron belonging to a full brain to have, it is a fair approximation. Some NRU designs might fail in a relatively simple neuronal environment, and thus testing all NRU designs using a number of environmental variables similar to the biological brain might be unnecessary (and thus economically prohibitive) given its cost-benefit ratio. And since we need to isolate the neuron to perform any early non-whole-organism experiments (i.e., on individual neurons) at all, having precise control over the number and nature of environmental variables would be relatively easy, as this is already an important part of the methodology used for normative biological experimentation anyway, because lack of control over environmental variables makes for an inconsistent methodology and thus for unreliable data.
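The single-neuron validation step described above reduces to comparing two response series driven by an identical, controlled input sequence. A hedged sketch, with a toy stand-in for the recorded biological responses and a deliberately simple divergence metric (mean absolute difference; the names and data are illustrative):

```python
# Illustrative sketch of the single-neuron validation experiment: drive a
# candidate NRU design and the recorded biological responses with the same
# controlled input sequence and quantify their functional divergence.

def functional_divergence(biological_outputs, nru_model, inputs):
    """Mean absolute difference between recorded biological outputs and
    the candidate NRU's outputs over the same controlled inputs."""
    diffs = [abs(b - nru_model(x)) for b, x in zip(biological_outputs, inputs)]
    return sum(diffs) / len(diffs)

# Toy data: the biological neuron is assumed to double its input;
# candidate A matches it, candidate B carries a small systematic error.
inputs = [0.1, 0.2, 0.3, 0.4]
biological = [2 * x for x in inputs]
candidate_a = lambda x: 2 * x
candidate_b = lambda x: 2 * x + 0.05

print(functional_divergence(biological, candidate_a, inputs) < 1e-12)   # True
print(round(functional_divergence(biological, candidate_b, inputs), 6)) # 0.05
```

In the actual program, "over time" matters: divergence would be tracked across the experiment's duration rather than averaged once, and candidates whose divergence grows would be culled before whole-network trials.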

And as we scale up to the whole-network and eventually the organismal level, a similar reduction in the computational requirements of the NRU's environmental simulation is possible by replacing the inputs or sensory mechanisms (from single photocells to whole sense organs) with VR-modulated input. The required complexity, and thus computational intensity, of a sensorially mediated environment can be vastly reduced if the normative sensory environment of the organism is supplanted with a much-simplified VR simulation.

Note that the efficacy of this approach in comparison with the first (reducing actual environmental variables) is hypothetically greater, because going from a simplified VR version to the original sensorial environment is a difference not of category but of degree. Thus a potentially fruitful variation on the first experiment (the physical reduction of a biological neuron's environmental variables) would be not the complete elimination of environmental variables, but rather a decrease in the range or degree of deviation of each variable: retaining all the categories while reducing their degree.
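The distinction between eliminating a variable and reducing its degree can be made concrete. In this minimal sketch (variable names, baseline values, and the damping scheme are all illustrative assumptions, not part of any actual protocol), every category of environmental variable is retained, but each one's deviation from its baseline is scaled down by a chosen degree:

```python
import random

def damp_environment(samples, baselines, degree):
    """Scale each variable's deviation from its baseline by `degree` (0..1).

    degree = 1.0 reproduces the full environment; degree = 0.0 pins every
    variable to its baseline (equivalent to eliminating its variability)
    while still keeping the variable present as an input category.
    """
    return {
        name: baselines[name] + degree * (value - baselines[name])
        for name, value in samples.items()
    }

# Illustrative environmental variables with noisy measured values.
baselines = {"temperature": 37.0, "glucose": 5.0, "k_extracellular": 3.5}
sample = {k: v + random.gauss(0.0, 0.5) for k, v in baselines.items()}

full    = damp_environment(sample, baselines, 1.0)   # unchanged environment
reduced = damp_environment(sample, baselines, 0.25)  # same categories, smaller deviations
```

An NRU validated under `reduced` conditions faces the same kinds of variation as one in the full environment, only milder, which is why moving from the reduced to the full case is a difference of degree rather than of category.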

Anecdotally, one novel modification conceived during this period involves distributing sensors (operatively connected to the sensory areas of the CNS) in the brain itself, so that we can viscerally sense ourselves thinking—the notion of metasensation: a sensorial infinite regress caused by having sensors in the sensory modules of the CNS, essentially allowing one to sense oneself sensing oneself sensing.

Another is a seeming refigurement of David Pearce's Hedonistic Imperative: namely, the use of active NRU modulation to negate the effects of cell (or, more generally, stimulus-response) desensitization, the phenomenon whereby the more times we experience something, or indeed even think something, the more it decreases in intensity. I felt that this was what made some of us lose interest in our lovers and become bored by things we once enjoyed. If we were able to stop cell desensitization, we wouldn't have to needlessly lose experiential amplitude for the things we love.

In the next chapter I will describe the work I did in the first months of 2008, during which I worked almost wholly on conceptual varieties of the physically embodied prosthetic (i.e., physical-functionalist) approach (particularly on gradually replacing subsections of individual neurons, so as to make the cumulative procedure more gradual), for several reasons:

The original utility of ‘hedging our bets’ as discussed earlier—developing multiple approaches increases evolutionary diversity; thus, if one approach fails, we have other approaches to try.

I felt the computational side was already largely developed in the work done by others in Whole-Brain Emulation, and thus that I would be benefiting the larger objective of indefinite longevity more by focusing on those areas that were then comparatively less developed.

The perceived benefit of a new approach to subjective continuity through a substrate-replacement procedure, one aiming to increase the likelihood of gradual uploading's success by increasing the procedure's cumulative degree of graduality. The approach was called Iterative Gradual Replacement and consisted of undergoing several gradual-replacement procedures, wherein the class of NRU used becomes progressively less similar to the operational modality of the original, biological neurons with each iteration; the greater the number of iterations used, the less discontinuous each replacement phase is in relation to its preceding and succeeding phases. The most basic embodiment of this approach would involve gradual replacement with physical-functionalist (prosthetic) NRUs, which in turn are gradually replaced with informational-physicalist (computational/emulatory) NRUs. My qualms with this approach today stem from the observation that the operational modalities of the physically embodied NRUs seem as discontinuous in relation to the operational modalities of the computational NRUs as the operational modalities of the biological neurons do. The problem seems to result from the lack of an intermediary stage between physical embodiment and computational (or second-order) embodiment.
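The core claim of Iterative Gradual Replacement can be given a toy quantification. Here "operational modality" is abstracted (purely for illustration; nothing in the original proposal maps modalities onto a number line) to a value on [0, 1], with 0.0 standing for the biological substrate and 1.0 for the fully computational one; each iteration replaces the substrate with an NRU class one step further along that axis, so the per-phase discontinuity is the modality distance any single replacement must cross:

```python
def per_phase_discontinuity(n_iterations):
    """Modality distance crossed by each individual replacement phase."""
    return 1.0 / n_iterations

def replacement_schedule(n_iterations):
    """Substrate modality after each successive replacement iteration."""
    return [k / n_iterations for k in range(1, n_iterations + 1)]

# One-shot uploading: a single jump from biological to computational.
print(per_phase_discontinuity(1))   # 1.0

# The basic two-stage embodiment: biological -> prosthetic -> computational.
print(replacement_schedule(2))      # [0.5, 1.0]

# More iterations make each phase less discontinuous than the last scheme's.
print(per_phase_discontinuity(10))  # 0.1
```

The qualm noted above is visible even in this caricature: halving the per-phase distance still leaves the prosthetic-to-computational step as a jump between categorically different embodiments, not merely a smaller move along a shared axis.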

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.
