Month: June 2013

Government Spying: Should We Be Shocked? – Article by Ron Paul

The New Renaissance Hat
Ron Paul
June 9, 2013
******************************

Last week we saw dramatic new evidence of illegal government surveillance of our telephone calls, and of the National Security Agency’s deep penetration into American companies such as Facebook and Microsoft to spy on us. The media seemed shocked.

Many of us are not so surprised.

Some of us were arguing back in 2001 with the introduction of the so-called PATRIOT Act that it would pave the way for massive US government surveillance—not targeting terrorists but rather aimed against American citizens. We were told we must accept this temporary measure to provide government the tools to catch those responsible for 9/11. That was nearly twelve years and at least four wars ago.

We should know by now that when it comes to government power-grabs, we never go back to the status quo even when the “crisis” has passed. The part of our freedom and civil liberties once lost is never regained. How many times did the PATRIOT Act need to be renewed? How many times did FISA authority need to be expanded? Why did we have to pass a law to grant immunity to companies that hand over our personal information to the government?

It was all a build-up of the government’s capacity to monitor us.

The reaction of some in Congress and the Administration to last week’s leak was predictable. Senator Lindsey Graham, a knee-jerk defender of the police state, declared that he was “glad” the government was collecting Verizon phone records—including his own—because the government needs to know what the enemy is up to. Those who take an oath to defend the Constitution from its enemies both foreign and domestic should worry about such statements.

House Intelligence Committee Chairman Mike Rogers tells us of the tremendous benefits of this Big Brother-like program. He promises us that domestic terrorism plots were thwarted, but he cannot tell us about them because they are classified. I am a bit skeptical, however. In April, the New York Times reported that most of these domestic plots were actually elaborate sting operations developed and pushed by the FBI. According to the Times report, “of the 22 most frightening plans for attacks since 9/11 on American soil, 14 were developed in sting operations.”

Even if Chairman Rogers is right, though, and the program caught someone up to no good, we have to ask ourselves whether even such a result justifies trashing the Constitution. Here is what I said on the floor of the House when the PATRIOT Act was up for renewal back in 2011:

“If you want to be perfectly safe from child abuse and wife beating, the government could put a camera in every one of our houses and our bedrooms, and maybe there would be somebody made safer this way, but what would you be giving up? Perfect safety is not the purpose of government. What we want from government is to enforce the law to protect our liberties.”

What most undermines the claims of the Administration and its defenders about this surveillance program is the process itself. First the government listens in on all of our telephone calls without a warrant, and then, if it finds something, it goes to a FISA court and gets an illegal approval for what it has already done! This turns the rule of law and due process on its head.

The government does not need to know more about what we are doing. We need to know more about what the government is doing. We need to turn the cameras on the police and on the government, not the other way around. We should be thankful for writers like Glenn Greenwald, who broke last week’s story, for taking risks to let us know what the government is doing. There are calls for the persecution of Greenwald and the other whistle-blowers and reporters. They should be defended, as their work defends our freedom.

Ron Paul, MD, is a former three-time Republican candidate for U.S. President and Congressman from Texas.

This article is reprinted with permission.

Against Monsanto, For GMOs – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
June 9, 2013
******************************

                The depredations of the multinational agricultural corporation Monsanto are rightly condemned by many. Monsanto is a prominent example of a crony corporation – a company that bolsters its market dominance not through honest competition and innovation, but through the persistent use of the political and legal system to enforce its preferences against its competitors and customers. Most outrageous is Monsanto’s stretching of patents beyond all conceivable limits – attempting to patent genes and life forms and to forcibly destroy the crops of farmers who replant seeds from crops originally obtained from Monsanto.

                Yet because Monsanto is one of the world’s leading producers of genetically modified crops, campaigners who oppose all genetically modified organisms (GMOs) often use Monsanto as the poster child for the problems with GMOs as a whole. The March Against Monsanto, which took place in cities worldwide in late May of 2013, is the most recent prominent example of this conflation. The blanket condemnation of GMOs because of Monsanto’s misbehavior is deeply fallacious. The policy of a particular company does not serve to discredit an entire class of products, just because that company produces those products – even if it could be granted that the company’s actions result in its own products being more harmful than they would otherwise be.

                GMOs, in conventional usage, are any life forms which have been altered through techniques more advanced than the kind of selective breeding which has existed for millennia. In fact, the only material distinction between genetic engineering and selective breeding is in the degree to which the procedure is targeted toward specific features of an organism. Whereas selective breeding is largely based on observation of the organism’s phenotype, genetic engineering relies on more precise manipulation of the organism’s DNA. Because of its ability to more closely focus on specific desirable or undesirable attributes, genetic engineering is less subject to unintended consequences than a solely macroscopic approach. Issues of a particular company’s abuse of the political system and its attempts to render the patent system ever more draconian do not constitute an argument against GMOs or the techniques used to create them.

                Consider that Monsanto’s behavior is not unique; similar depredations are found throughout the status quo of crony corporatism, where many large firms thrive not on the basis of merit, but on the basis of political pull and institutionalized coercion. Walt Disney Corporation has made similar outrageous (and successful) attempts to extend the intellectual-property system solely for its own benefit. The 1998 Copyright Term Extension Act was primarily motivated by Disney’s lobbying to prevent the character of Mickey Mouse from entering the public domain. Yet are all films, and all animated characters, evil or wrong because Disney manipulated the legal system instead of competing fairly and honestly on the market? Surely, to condemn films on the basis of Disney’s behavior would be absurd.

                Consider, likewise, Apple Corporation, which has attempted to sue its competitors’ products out of existence and to patent the rectangle with rounded corners – a geometric shape which is no less basic an idea in mathematics than a trapezoid or an octagon. Are all smartphones, tablet computers, MP3 players, and online music services – including those of Apple’s competitors – wrong and evil solely because of Apple’s unethical use of the legal system to squelch competition? Surely not! EA Games, until May 2013, embedded crushingly restrictive digital-rights management (DRM) into its products, requiring a continuous Internet connection (and de facto continual monitoring of the user by EA) for some games to be playable at all. Are all computer games and video games evil and wrong because of EA’s intrusive anti-consumer practices? Should they all be banned in favor of only those games that use pre-1950s-era technology – e.g., board games and other table-top games? If the reader does not support the wholesale abolition, or even the limitation, of films, consumer electronics, and games as a result of the misbehavior of prominent makers of these products, then what rationale can there possibly be for viewing GMOs differently?

                Indeed, the loathing of all GMOs stems from a more fundamental fallacy, for which any criticism of Monsanto only provides convenient cover. That fallacy is the assumption that “the natural” – i.e., anything not affected by human technology, or, more realistically, human technology of sufficiently recent origin – is somehow optimal for human purposes or simply for its own sake. While it is logically conceivable that some genetic modifications to organisms could render them more harmful than they would otherwise be (though there has never been any evidence of such harms arising despite the trillions of servings of genetically modified foods consumed to date), the condemnation of all genetic modifications using techniques from the last 60 years is far more sweeping than this. Such condemnation is not and cannot be scientific; rather, it is an outgrowth of the indiscriminate anti-technology agenda of the anti-GMO campaigners. A scientific approach, based on experimentation, empirical observation, and the immense knowledge thus far amassed regarding chemistry and biology, might conceivably give rise to a sophisticated classification of GMOs based on gradations of safety, safe uses, unsafe uses, and possible yet-unknown risks. The anti-GMO campaigners’ approach, on the other hand, can simply be summarized as “Nature good – human technology bad” – not scientific or discerning at all.

                The reverence for purportedly unaltered “nature” completely ignores the vicious, cruel, appallingly wasteful (not even to mention suboptimal) conditions of any environment untouched by human influence. After all, 99.9% of all species that ever existed are extinct – the vast majority from causes that arose long before human beings evolved. The plants and animals that primitive hunter-gatherers consumed did not evolve with the intention of providing optimal nutrition for man; they simply happened to be around, attainable for humans, and nutritious enough that humans did not die right away after consuming them – and some humans (the ones that were not poisoned, or killed hunting, or murdered by their fellow men) managed to survive to reproductive age by eating these “natural” foods. Just because the primitive “paleo” diet of our ancestors enabled them to survive long enough to trigger the chain of events that led to us, does not render their lives, or their diets, ideal for emulation in every aspect. We can do better. We must do better – if protection of large numbers of human beings from famine, drought, pests, and prohibitive costs of food is to be considered a moral priority in the least. By depriving human beings of the increased abundance, resilience, and nutritional content that only the genetic modification of foods can provide, anti-GMO campaigners would sentence millions – perhaps billions – of humans to the miserable subsistence conditions and tragically early deaths of their primeval forebears, of whom the Earth could support only a few million without human agricultural interventions.

                We do not need to like Monsanto in order to embrace the life-saving, life-enhancing potential of GMOs. We need to consider the technology involved in GMOs on its own terms, imagining how we would view it if it could be delivered by economic arrangements we would prefer. As a libertarian individualist, I advocate for a world in which GMOs could be produced by thousands of competing firms, each fairly trying to win the business of consumers through the creation of superior products which add value to people’s lives. If you are justifiably concerned about the practices of Monsanto, consider working toward a world like that, instead of a world where the promise of GMOs is denied to the billions who currently owe their very existences to human technology and ingenuity.

My Views on “Eden against the Colossus” – Ten Years Later – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
June 8, 2013
******************************

Not long after the release of the Second Edition of my 2003-2004 science-fiction mystery novel Eden against the Colossus, I was asked whether any of the views I expressed in the novel had changed since then, and, if so, to what extent.

I still strongly adhere to most of the fundamental philosophical principles expressed in Eden against the Colossus: the existence of an objective reality, the necessity for reason and rigorous inquiry in discovering it, the supreme value of the individual, the virtue of enlightened self-interest, and the immense benefits of technological progress for improving, elevating, and extending the human condition.

In the introduction to the Second Edition I discussed how, in retrospect, the future society described in Eden against the Colossus seems like a pessimistic scenario of how far humanity would progress technologically during the 750 years since the writing of the novel (for instance, in the lack of autonomous artificial intelligence or indefinite life extension – though there are many advanced robots and the average lifespan has increased by perhaps a factor of three in the world initially presented in the novel). This is a world very much characterized by a stark good-versus-evil conflict – that of the individualists/technoprogressives versus the Malthusians/Neo-Luddites. In the novel I occasionally use the term “environmentalists” to describe the Malthusians/Neo-Luddites; today, I would make a subtler distinction between those environmentalists who favor free-market and/or technological solutions to the problems they perceive, and those who see the only solutions as a “return to Nature” and a curtailment of human population. My quarrel is, and has fundamentally always been, only with those environmentalists who seek to reject or limit technological progress – particularly those who would use force to impose their preferences on others. Today I would be more careful to describe my views as anti-Luddite, rather than anti-environmentalist, in order to recognize as possible allies those environmentalists who would embrace technology with incidental benefits such as the reduction of pollution or the more efficient use of resources.

Were I writing the novel today, the society which results as the outcome of the individualist/technoprogressive vision would look quite different as well. The Intergalactic Protectorate is a libertarian system, but a highly centralized one nonetheless. Through its storyline though not through its explicit philosophical ideas, Eden against the Colossus illustrates the vulnerabilities of such a system and the ease of turning the machinery of the Protectorate against the very ideals it is supposed to protect. This is true, I now realize, of any large, centralized institution – public or private, controlled by virtuous people, or by mediocrities or crooks. As an example of this, one needs only to consider how the vast, largely voluntary centralization of information on the Internet – during the age of dominant providers of social-networking, search, and content-hosting services – has enabled sweeping surveillance of virtually all Americans by the National Security Agency through “backdoors” into the systems of the dominant Internet companies. No one person – and no one institution – can be the sole effective guardian of liberty. On the other hand, a society filled with political experiments, as well as experiments in decentralized technologies applied to every area of life, would be much more robust against usurpations of power and incursions against individual rights. Undertakings such as seasteading, a decentralized Meshnet, and Bitcoin have intrigued me in recent years as ways to empower individuals by reducing their dependence on large institutions and decreasing the number of ways by which power asymmetry enables those with ill intentions to get away with inconveniencing or outright oppressing innocent people.

A truly libertarian future will not resemble today’s corporate America on an intergalactic scale, only with considerably less regulation and a more stringently written Constitution enforced by a fourth branch of government possessing negative power only – essentially, the society portrayed in Eden against the Colossus. If humanity is to achieve an intergalactic presence, it will likely be in the form of hundreds of thousands of diverse and autonomous networks of people, largely possessing fluid social and political structures. The balance of power in such a world would greatly favor individuals who are hyperempowered by technology. Furthermore, if technology is to have the ability to radically enhance human intelligence and reasoning, then many of the philosophical disputes that have recurred throughout history may, in future eras, be settled by a more rigorous and nuanced framing of the ideas under consideration. The intellectual conflicts of the future are not likely to be of the hitherto-encountered “capitalist versus socialist” or “technoprogressive versus environmentalist” variety – since the evolution of technology and culture, as well as the shifting dynamics of human societies, will raise new issues of focus which will lead to interesting and unanticipated alignments of persons of various perspectives. It would be entirely possible for some issues to unite erstwhile opponents – as principled libertarians and principled socialists today both detest crony corporatism, or as technoprogressives and some technology-friendly environmentalists today support nuclear power and organisms bioengineered to clean up pollution.

With regard to the personal lives of the characters of Eden against the Colossus, my view today no longer necessitates a glorification of ceaseless work, though productivity remains important to me without a doubt. The enjoyment of the fruits of productive work – and the ability to increase the proportion of one’s time spent in that enjoyment without diminishing one’s productivity – are among the outcomes made possible by technological progress. Such outcomes are insufficiently illustrated in Eden against the Colossus. Moreover, were I to write the novel today, I would have more greatly focused on the ability of a technological society to provide individuals with the opportunity to balance work, leisure, relationships, and a broad awareness of numerous areas of existence.

Along with all of these qualifying statements, however, I nonetheless emphasize my view that the fundamental essence of the conflict depicted in Eden against the Colossus is still a valid and vital subject for contemplation and for consideration of its relevance to our lives. As long as humankind continues to exist in anything resembling its present form, two fundamental motivations – the desire for improvement of the human condition and the desire for restrictive control that would suppress efforts to alter the status quo – will continue to be at odds, in whatever unforeseeable future embodiments they might come to possess. Perhaps sufficient technological progress will shift the balance of human biology, environment, and incentives further away from the command-and-control motive and closer toward the pure motives of amelioration and progress. One can certainly hope.

Immortality: Bio or Techno? – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 5, 2013
******************************
This essay is the eleventh and final chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first ten chapters were previously published on The Rational Argumentator under the following titles:
***

I Was a Techno-Immortalist Before I Came of Age

From the preceding chapters in this series, one can see that I recapitulated many notions and conclusions found in normative Whole-Brain Emulation. I realized that testing for functional divergence between a candidate functional-equivalent and its original—through the process of virtual or artificial replication of environmental stimuli so as to coordinate their inputs—provides an experimental methodology for empirically validating the sufficiency and efficacy of different approaches. (Note, however, that such tests could not be performed to determine which NRU-designs or replication-approaches would preserve subjective-continuity, if the premises entertained during later periods of my project—that subjective-continuity may require a sufficient degree of operational “sameness”, and not just a sufficient degree of functional “sameness”—are correct.) I realized that we would only need to replicate in intensive detail and rigor those parts of our brain manifesting our personalities and higher cognitive faculties (i.e., the neocortex), and could get away with replicating at lower functional resolution the parts of the nervous system dealing with perception, actuation, and feedback between perception and actuation.

I read Eric Drexler’s Engines of Creation and imported the use of nanotechnology to facilitate both functional-replication (i.e., the technologies and techniques needed to replicate the functional and/or operational modalities of existing biological neurons) and the intensive, precise, and accurate scanning necessitated thereby. This was essentially Ray Kurzweil’s and Robert Freitas’s approach to the technological infrastructure needed for mind-uploading, as I discovered in 2010 via The Singularity is Near.

My project also bears stark similarities to Dmitry Itskov’s Project Avatar. My work on conceptual requirements for transplanting the biological brain into a fully cybernetic body — taking advantage of the technological and methodological infrastructures already in development for use in the separate disciplines of robotics, prosthetics, Brain-Computer Interfaces and sensory-substitution to facilitate the operations of the body — is a prefigurement of his Phase 1. My later work on approaches to functional replication of neurons for the purpose of gradual substrate replacement/transfer and integration also parallels his later phases, in which the brain is gradually replaced with an equivalent computational emulation.

The main difference between my project and the extant Techno-Immortalist approaches, however, lies in my later inquiries into neglected potential bases for (a) our sense of experiential subjectivity (the feeling of being, what I’ve called immediate subjective-continuity)—and thus the entailed requirements for mental substrates aiming to maintain or attain such immediate subjectivity—and (b) our sense of temporal subjective-continuity (the feeling of being the same person through a process of gradual substrate-replacement—which I take pains to remind the reader already exists in the biological brain via the natural biological process of molecular turnover, which I called metabolic replacement throughout the course of the project), and, likewise, the requirements for mental substrates aiming to maintain temporal subjective-continuity through a gradual substrate-replacement/transfer procedure.

In this final chapter, I summarize the main approaches to subjective-continuity thus far considered, including possible physical bases for its current existence and the entailed requirements for NRU designs (that is, for Techno-Immortalist approaches to indefinite-longevity) that maintain such physical bases of subjective-continuity. I will then explore why “Substrate-Independent Minds” is a useful and important term, and try to dispel one particularly common and easy-to-make misconception resulting from it.

Why Should We Worry about Subjective-Continuity?

This concern marks perhaps the most telling difference between my project and normative Whole-Brain Emulation. Instead of stopping at the presumption that functional equivalence correlates with immediate subjective-continuity and temporal subjective-continuity, I explored several features of neural operation that looked like candidates for providing a basis of both types of subjective-continuity, by looking for those systemic properties and aspects that the biological brain possesses and other physical systems don’t. The physical system underlying the human mind (i.e., the brain) possesses experiential subjectivity; my premise was that we should look for properties not shared by other physical systems to find a possible basis for the property of immediate subjective-continuity. I’m not claiming that any of the aspects and properties considered definitely constitute such a basis; they were merely the avenues I explored throughout my 4-year quest to conquer involuntary death. I do claim, however, that we are forced to conclude that some aspect shared by the individual components (e.g., neurons) of the brain and not shared by other types of physical systems forms such a basis (which doesn’t preclude the possibility of immediate subjective-continuity being a spectrum or gradient rather than a definitive “thing” or process with non-variable parameters), or else that immediate subjective continuity is a normal property of all physical systems, from atoms to rocks.

A phenomenological proof of the non-equivalence of function and subjectivity or subjective-experientiality is the physical irreducibility of qualia – that we could understand in intricate detail the underlying physics of the brain and sense-organs, and nowhere derive or infer the nature of the qualia such underlying physics embodies. To experimentally verify which approaches to replication preserve both functionality and subjectivity would necessitate a science of qualia. This could be conceivably attempted through making measured changes to the operation or inter-component relations of a subject’s mind (or sense organs)—or by integrating new sense organs or neural networks—and recording the resultant changes to his experientiality—that is, to what exactly he feels. Though such recordings would be limited to his descriptive ability, we might be able to make some progress—e.g., he could detect the generation of a new color, and communicate that it is indeed a color that doesn’t match the ones normally available to him, while still failing to communicate to others what the color is like experientially or phenomenologically (i.e., what it is like in terms of qualia). This gets cruder the deeper we delve, however. While we have unchanging names for some “quales” (i.e., green, sweetness, hot, and cold), when it gets into the qualia corresponding with our perception of our own “thoughts” (which will designate all non-normatively perceptual experiential modalities available to the mind—thus, this would include wordless “daydreaming” and exclude autonomic functions like digestion or respiration), we have both far less precision (i.e., fewer words to describe) and less accuracy (i.e., too many words for one thing, which the subject may confuse; the lack of a quantitative definition for words relating to emotions and mental modalities/faculties seems to ensure that errors may be carried forward and increase with each iteration, making precise correlation of operational/structural changes with changes to qualia or experientiality increasingly harder and more unlikely).

Thus whereas the normative movements of Whole-Brain Emulation and Substrate-Independent Minds stopped at functional replication, I explored approaches to functional replication that preserved experientiality (i.e., a subjective sense of anything) and that maintained subjective-continuity (the experiential correlate of feeling like being yourself) through the process of gradual substrate-transfer.

I do not mean to undermine in any way Whole-Brain Emulation and the movement towards Substrate-Independent Minds promoted by such people as Randal Koene via, formerly, his minduploading.org website and, more recently, his Carbon Copies project, Anders Sandberg and Nick Bostrom through their WBE Roadmap, and various other projects on connectomes. These projects are untellably important, but conceptions of subjective-continuity (not pertaining to its relation to functional equivalence) are beyond their scope.

Whether or not subjective-continuity is possible through a gradual-substrate-replacement/transfer procedure is not under question. That we achieve and maintain subjective-continuity despite our constituent molecules being replaced within a period of 7 years, through what I’ve called “metabolic replacement” but what would more normatively be called “molecular-turnover” in molecular biology, is not under question either. What is under question is (a) what properties biological nervous systems possess that could both provide a potential physical basis for subjective-continuity and that other physical systems do not possess, and (b) what the design requirements are for approaches to gradual substrate replacement/transfer that preserve such postulated sources of subjective-continuity.

Graduality

This was the first postulated basis for preserving temporal subjective-continuity. Our bodily systems’ constituent molecules are all replaced within a span of 7 years, which provides empirical verification for the existence of temporal subjective-continuity through gradual substrate replacement. This is not, however, an actual physical basis for immediate subjective-continuity, like the later avenues of enquiry. It is, rather, a way of avoiding externally induced subjective-discontinuity, not a means of maintaining the existing biological bases for subjective-continuity. We are most likely to avoid negating subjective-continuity through a substrate-replacement procedure if we try to maintain the existing degree of graduality (the molecular-turnover or “metabolic-replacement” rate) that exists in biological neurons.

The reasoning behind concerns of graduality also serves to illustrate a common misconception created by the term “Substrate-Independent Minds”. This term should denote the premise that mind can be instantiated on different types of substrate, in the way that a given computer program can run on different types of computational hardware. It stems from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that mind is an emergent process not reducible to its isolated material constituents, while still being instantiated thereby. The first (legitimate) interpretation is a refutation of all claims of metaphysical vitalism or substance dualism. The term should not denote the claim that because mind is software, we can thus send our minds (say, encoded in a wireless signal) from one substrate to another without subjective-discontinuity. This second meaning would incur the emergent effect of a non-gradual substrate-replacement procedure (that is, the wholesale reconstruction of a duplicate mind without any gradual integration procedure). In such a case one stops all causal interaction between components of the brain—in effect putting it on pause. The brain is now static. This is different even from being in an inoperative state, where at least the components (i.e., neurons) still undergo minor operational fluctuations and are still “on” in an important sense (see “Immediate Subjective-Continuity” below), which is not the case here. Beaming between substrates necessitates that all causal interaction—and thus procedural continuity—between software-components be halted during the interval of time in which the information is encoded, sent wirelessly, and subsequently decoded. The mind would be reinstantiated upon arrival in the new substrate, yes, but not without being put on pause in the interim. The phrase “Substrate-Independent Minds” is an important and valuable one and should indeed be championed with righteous vehemence—but only in regard to its first meaning (that mind can be instantiated on various different substrates) and not its second, illegitimate meaning (that we ourselves can switch between mental substrates, without any sort of gradual-integration procedure, and still retain subjective-continuity).

Later lines of thought in this regard consisted of positing several sources of subjective-continuity and then conceptualizing various different approaches or varieties of NRU-design that would maintain these aspects through the gradual-replacement procedure.

Immediate Subjective-Continuity

This line of thought explored whether certain physical properties of biological neurons provide the basis for subjective-continuity, and whether current computational paradigms would need to possess such properties in order to serve as a viable substrate-for-mind—that is, one that maintains subjective-continuity. The biological brain has massive parallelism—that is, separate components are instantiated concurrently in time and space. They actually exist and operate at the same time. By contrast, current paradigms of computation, with a few exceptions, are predominantly serial. They instantiate a given component or process one at a time and jump between components or processes so as to integrate these separate instances and create the illusion of continuity. If such computational paradigms were used to emulate the mind, then only one component (e.g., neuron or ion-channel, depending on the chosen model-scale) would be instantiated at a given time. This line of thought postulates that computers emulating the mind may need to be massively parallel in the same way that the biological brain is in order to preserve immediate subjective-continuity.
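As a purely illustrative aside (not part of the original essay), the serial-versus-parallel distinction can be sketched in a few lines of Python. The toy neuron model, the tanh update rule, and the coupling weights below are arbitrary placeholders; the point is only that the serial loop instantiates one emulated component's state-change at a time, while the vectorized update computes every component's next state in one concerted step.

```python
import numpy as np

# Toy state: membrane-like values for N emulated "neurons" (placeholder model).
N = 1000
state = np.zeros(N)
weights = np.random.randn(N, N) * 0.01   # hypothetical all-to-all coupling
inputs = np.random.randn(N)

def step_serial(state):
    """Serial paradigm: only one component is 'instantiated' at any moment."""
    new_state = state.copy()
    for i in range(N):                    # components updated one by one
        new_state[i] = np.tanh(weights[i] @ state + inputs[i])
    return new_state

def step_parallel(state):
    """Parallel-in-spirit paradigm: all components updated in one concerted step."""
    return np.tanh(weights @ state + inputs)

# Both produce the same *function*; the essay's question is whether the
# difference in how the computation is staged in time matters for
# immediate subjective-continuity.
assert np.allclose(step_serial(state), step_parallel(state))
```

Of course, a vectorized NumPy call still runs on largely serial hardware; the sketch only gestures at the contrast the paragraph draws, not at a genuinely concurrent substrate.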

Procedural Continuity

Much like the preceding line of thought, this postulates that a possible basis for temporal subjective-continuity is the resting membrane potential of neurons. While in an inoperative state—i.e., not being impinged by incoming action-potentials, or not being stimulated—the neuron (a) isn’t definitively off, but rather produces a baseline voltage that assures that there is no break (or region of discontinuity) in its operation, and (b) still undergoes minor fluctuations from the baseline value within a small deviation-range, thus showing that causal interaction amongst the components emergently instantiating that resting membrane potential (namely ion-pumps) never halts. Logic gates, on the other hand, do not produce a continuous voltage when in an inoperative state. This line of thought claims that computational elements used to emulate the mind should exhibit the generation of such a continuous inoperative-state signal (e.g., voltage) in order to maintain subjective-continuity. The claim’s stronger version holds that the continuous inoperative-state signal produced by such computational elements should also undergo minor fluctuations (i.e., state-transitions) allowed within the range of the larger inoperative-state signal, thereby maintaining causal interaction among lower-level components and thus exhibiting the postulated basis for subjective-continuity—namely procedural continuity.
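The contrast between a procedurally continuous element and a procedurally discontinuous one can be caricatured in code. This sketch is mine, not the essay's, and the baseline and noise values are arbitrary placeholders: the neuron-like element keeps emitting a fluctuating baseline signal while idle, whereas the gate-like element emits nothing at all between operations.

```python
import random

RESTING_BASELINE = -70.0   # placeholder baseline value, not a physiological constant
NOISE_RANGE = 0.5          # placeholder fluctuation amplitude

def idle_neuron_like(steps):
    """Inoperative but never 'off': a baseline signal plus minor fluctuations."""
    return [RESTING_BASELINE + random.uniform(-NOISE_RANGE, NOISE_RANGE)
            for _ in range(steps)]

def idle_logic_gate_like(steps):
    """Inoperative and procedurally discontinuous: no signal between operations."""
    return [None for _ in range(steps)]

print(idle_neuron_like(5))       # values hovering around the baseline
print(idle_logic_gate_like(5))   # [None, None, None, None, None]
```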

Operational Isomorphism

This line of thought claims that a possible source of subjective-continuity is the operational similarity (isomorphism) of the baseline components comprising the emergent system instantiating mind. In the physical brain this isn’t a problem, because the higher-scale components (e.g., single neurons, sub-neuron components like ion-channels and ion-pumps, and individual protein complexes forming the sub-components of an ion-channel or pump) are instantiated by the lower-level components. Those lower-level components are more similar in terms of the rules determining behavior and state-changes. At the molecular scale, the features determining state-changes (intra-molecular forces, atomic valences, etc.) are the same. This changes as we go up the scale—most notably at the scale of high-level neural regions/systems. In a software model, however, we have a choice as to what scale we use as our model-scale. This postulated source of subjective-continuity would entail that we choose as our model-scale one in which the components of that scale have a high degree of this property (operational isomorphism—or similarity), and that we not choose a scale at which the components have a lesser degree of this property.

Operational Continuity

This line of thought explored the possibility that we might introduce operational discontinuity by modeling (i.e., computationally instantiating) not the software instantiated by the physical components of the neuron, but instead those physical components themselves—which for illustrative purposes can be considered as the difference between instantiating software and instantiating the physics of the logic gates giving rise to the software. Though the software would still be instantiated—vicariously, as a result of computationally instantiating its biophysical foundation rather than the software directly—we may be introducing additional operational steps and thus adding an unnecessary dimension of discontinuity that needlessly jeopardizes the likelihood of subjective-continuity.
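A rough software analogy (my own illustration, not the essay's) of "instantiating the software versus instantiating the logic gates beneath it": both functions below compute the same one-bit addition, but the second reproduces it through the intermediate steps of a half-adder's individual gates, adding operational steps without changing the function—the kind of extra intermediate staging the paragraph is concerned with.

```python
def add_bits_directly(a: int, b: int) -> tuple[int, int]:
    """'Software-level' instantiation: one direct arithmetic step."""
    s = a + b
    return s % 2, s // 2              # (sum bit, carry bit)

def add_bits_via_gates(a: int, b: int) -> tuple[int, int]:
    """'Component-level' instantiation: the same result, reached through the
    intermediate steps of a half-adder's individual gates."""
    xor = (a | b) & ~(a & b) & 1      # XOR built from OR, AND, NOT
    carry = a & b                     # AND gate
    return xor, carry

# Functionally equivalent, operationally staged differently.
for a in (0, 1):
    for b in (0, 1):
        assert add_bits_directly(a, b) == add_bits_via_gates(a, b)
```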

These concerns are wholly divorced from functionalist concerns. If we disregarded these potential sources of subjective-continuity, we could still functionally replicate a mind in all empirically verifiable measures yet nonetheless fail to create minds possessing experiential subjectivity. Moreover, the verification experiments discussed in Part 2 do provide a falsifiable methodology for determining which approaches best satisfy the requirements of functional equivalence. They do not, however, provide a method of determining which postulated sources of subjective-continuity are true—simply because we have no falsifiable measures of either immediate or temporal subjective-discontinuity, other than functionality. If functional equivalence failed, that would tell us that subjective-continuity failed to be maintained. If functional equivalence were achieved, however, that would not guarantee that subjective-continuity was maintained.

Bio or Cyber? Does It Matter?

Biological approaches to indefinite-longevity, such as Aubrey de Grey’s SENS and Michael Rose’s Evolutionary Selection for Longevity, among others, have both comparative advantages and drawbacks. With biological approaches, the chances of introducing subjective-discontinuity are virtually nonexistent compared to non-biological (which I will refer to as Techno-Immortalist) approaches. This makes them at once more appealing. However, it remains to be seen whether the advantages of the techno-immortalist approach outweigh its comparative dangers in regard to its potential to introduce subjective-discontinuity. If such dangers can be obviated, however, it has certain potentials which Bio-Immortalist projects lack—or which are at least comparatively harder to facilitate using biological approaches.

Perhaps foremost among these potentials is the ability to actively modulate and modify the operations of individual neurons, which, if integrated across scales (that is, the concerted modulation/modification of whole emergent neural networks and regions via operational control over their constituent individual neurons), would allow us to take control over our own experiential and functional modalities (i.e., our mental modes of experience and general abilities/skills), thus increasing our degree of self-determination and the control we exert over the circumstances and determining conditions of our own being. Self-determination is the sole central and incessant essence of man; it is his means of self-overcoming—of self-dissent in a striving towards self-realization—and the ability to increase the extent of such self-control, self-mastery, and self-actualization is indeed a comparative advantage of techno-immortalist approaches.

To modulate and modify biological neurons, on the other hand, necessitates either high-precision genetic engineering, or likely the use of nanotech (i.e., NEMS), because whereas the proposed NRUs already have the ability to controllably vary their operations, biological neurons necessitate an external technological infrastructure for facilitating such active modulation and modification.

Biological approaches to increased longevity also appear to necessitate less technological infrastructure in terms of basic functionality. Techno-immortalist approaches require precise scanning technologies and techniques that neither damage nor distort (i.e., affect to the point of operational and/or functional divergence from their normal in situ state of affairs) the features and properties they are measuring. However, there is a useful distinction to be made between biological approaches to increased longevity and biological approaches to indefinite longevity. Aubrey de Grey’s notion of Longevity Escape Velocity (LEV) serves to illustrate this distinction. With SENS and most biological approaches, he points out that although remediating certain biological causes of aging will extend our lives, by that time different causes of aging that were superseded (i.e., prevented from making a significant impact on aging) by the higher-impact causes of aging may begin to make a non-negligible impact. Aubrey’s proposed solution is LEV: if we can develop remedies for these newly significant causes within the amount of time gained by the remediation of the first set of causes, then we can stay on the leading edge and continue to prolong our lives. This is in contrast to other biological approaches, like Eric Drexler’s conception of nanotechnological cell-maintenance and cell-repair systems, which—by virtue of being able to fix any source of molecular damage or disarray vicariously (not by eliminating the source, but by iterative repair and/or replacement of the causes or “symptoms” of the source)—will continue to work on any new molecular causes of damage without any new upgrades or innovations to their underlying technological and methodological infrastructures.

Such systems would more appropriately be deemed indefinite-biological-longevity technologies, in contrast to mere biological-longevity technologies. Techno-immortalist approaches are by and large exclusively of the indefinite-longevity-extension variety, and so have an advantage over certain biological approaches to increased longevity, but such advantages do not apply to biological approaches to indefinite longevity.
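To make the Longevity Escape Velocity idea above concrete, here is a toy back-of-the-envelope sketch in Python. All numbers are invented for illustration and have nothing to do with de Grey's actual projections: each calendar year consumes one year of remaining life expectancy, while ongoing therapies add some number of years back, and escape velocity is simply the regime in which the yearly gain exceeds one.

```python
def years_remaining_over_time(initial_remaining: float,
                              yearly_gain: float,
                              horizon: int) -> list[float]:
    """Each calendar year costs 1 year of remaining life expectancy, while
    research adds `yearly_gain` years back. LEV is reached when yearly_gain > 1."""
    remaining = initial_remaining
    history = []
    for _ in range(horizon):
        remaining = remaining - 1 + yearly_gain
        history.append(remaining)
        if remaining <= 0:           # death outruns the therapies
            break
    return history

# Hypothetical scenarios:
print(years_remaining_over_time(30, 0.5, 100))   # shrinks to zero: no escape velocity
print(years_remaining_over_time(30, 1.2, 10))    # keeps growing: escape velocity
```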

A final advantage of techno-immortalist approaches is the independence from external environments that they provide us. They also make death by accident far less likely, both by enabling us to have more durable bodies and by providing independence from external environments, which means that certain extremes of temperature, pressure, impact-velocity, atmosphere, etc., will not immediately entail our death.

I do not want to discredit any approaches to immortality discussed in this essay, nor any I haven’t mentioned. Every striving and attempt at immortality is virtuous and righteous, and this sentiment will only become more and more apparent, culminating on the day when humanity looks back and wonders how we could have spent so very much money and effort on the Space Race to the Moon with no perceivable scientific, resource, or monetary gain (though there were some nationalistic and militaristic considerations in terms of America not being superseded on either account by Russia), yet took so long to make a concerted global effort to first demand and then implement well-funded attempts to finally defeat death—that inchoate progenitor of 100,000 unprecedented cataclysms a day. It’s true—the world ends 100,000 times a day, to be lighted upon not once more for all of eternity. Every day. What have you done to stop it?

So What?

Indeed, so what? What does this all mean? After all, I never actually built any systems, or did any physical experimentation. I did, however, do a significant amount of conceptual development and thinking on both the practical consequences (i.e., required technologies and techniques, different implementations contingent upon different premises and possibilities, etc.) and the larger social and philosophical repercussions of immortality prior to finding out about other approaches. And I planned on doing physical experimentation and building physical systems; but I thought that working on it in my youth, until such a time as to be in the position to test and implement these ideas more formally via academia or private industry, would be better for the long-term success of the endeavor.

As noted in Chapter 1, this reifies the naturality and intuitive simplicity of indefinite longevity’s ardent desirability and fervent feasibility, along a large variety of approaches ranging from biotechnology to nanotechnology to computational emulation. It also reifies the naturality and desirability of Transhumanism. I saw one of the virtues of this vision as its potential to make us freer, to increase our degree of self-determination, as giving us the ability to look and feel however we want, and the ability to be—and more importantly to become—anything we so desire. Man is marked most starkly by his urge and effort to make his own self—to formulate the best version of himself he can, and then to actualize it. We are always reaching toward our better selves—striving forward in a fit of unbound becoming toward our newest and thus truest selves; we always have been, and with any courage we always will.

Transhumanism is but the modern embodiment of our ancient striving towards increased self-determination and self-realization—of all we’ve ever been and done. It is the current best contemporary exemplification of what has always been the very best in us—the improvement of self and world. Indeed, the ‘trans’ and the ‘human’ in Transhumanism can only signify each other, for to be human is to strive to become more than human—or to become more so human, depending on which perspective you take.

So come along and long for more with me; the best is e’er yet to be!

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Koene, R. (2011). What is carboncopies.org? Retrieved February 28, 2013 from http://www.carboncopies.org/

Rose, M. (October 28, 2004). Biological Immortality. In B. Klein, The Scientific Conquest of Death (pp. 17-28). Immortality Institute.

Sandberg, A., & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap, Technical Report #2008-3. Retrieved February 28, 2013 from http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013 from http://www.minduploading.org/

de Grey, A. D. N. J. (2004). Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now. PLoS Biol 2(6): e187. doi:10.1371/journal.pbio.0020187

Variations on Alternating Marches, Op. 15 (2002) – Video by G. Stolyarov II

This composition by Mr. Stolyarov was written down in 2002, but some parts of it were virtually impossible for a pianist of any skill to perform. This version of the composition is played using Finale 2011 software and the Steinway Grand Piano instrument.

Two themes predominate in this work. Their juxtaposition is unusual in that the meters of each theme differ. The opening theme (A) is in 2/4 meter, whereas the second main theme (B) is in 3/4 meter. Both themes are marches, though B is noticeably heavier than A both in terms of the mood and the chords involved. The melodies of both A and B are alternated and repeated throughout the work, but the accompaniment changes dramatically. The last appearance of theme A, for instance, has it accompanied by scales of 32nd-notes — a virtually impossible feat for any human musician to execute.

Download the MP3 file of this composition here.

See the index of Mr. Stolyarov’s compositions, all available for free download, here.

The artwork is Mr. Stolyarov’s Abstract Orderism Fractal 48, available for download here and here.

Remember to LIKE, FAVORITE, and SHARE this video in order to spread rational high culture to others.

Maintaining the Operational Continuity of Replicated Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 3, 2013
******************************
This essay is the tenth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first nine chapters were previously published on The Rational Argumentator under the following titles:
***

Operational Continuity

One of the reasons for continuing conceptual development of the physical-functionalist NRU (neuron-replication-unit) approach, despite the perceived advantages of the informational-functionalist approach, was to provide a fallback in the event that computational emulation would either fail to successfully replicate a given physical process (thus a functional-modality concern) or fail to successfully maintain subjective-continuity (thus an operational-modality concern), most likely due to a difference in the physical operation of possible computational substrates compared to the physical operation of the brain (see Chapter 2). In regard to functionality, we might fail to computationally replicate (whether in simulation or emulation) a relevant physical process for reasons other than vitalism. We could fail to understand the underlying principles governing it, or we might understand its underlying principles well enough to predictively model it yet still fail to understand how it affects the other processes occurring in the neuron—for instance, if we used different modeling techniques or general model types to model each component, effectively being able to predictively model each individually while being unable to model how they affect each other, due to model untranslatability. Neither of these cases precludes the aspect in question from being completely material, and thus completely potentially explicable using the normative techniques we use to predictively model the universe. The physical-functionalist approach attempted to solve these potential problems through several NRU sub-classes, some of which kept certain biological features and functionally replaced certain others, while others kept and functionally replaced alternate sets of biological features. These can be considered varieties of biological-nonbiological NRU hybrids that functionally integrate into their own, predominantly non-biological operation those biological features—as they exist in the biological nervous system—which we failed to replicate functionally or operationally.

The subjective-continuity problem, however, is not concerned with whether something can be functionally replicated but with whether it can be functionally replicated while still retaining subjective-continuity throughout the procedure.

This category of possible basis for subjective-continuity has stark similarities to the possible problematic aspects (i.e., operational discontinuity) of current computational paradigms and substrates discussed in Chapter 2. In that case it was postulated that discontinuity occurred as a result of taking something normally operationally continuous and making it discontinuous: namely, (a) the fact that current computational paradigms are serial (whereas the brain has massive parallelism), which may cause components to only be instantiated one at a time, and (b) the fact that the resting membrane potential of biological neurons makes them procedurally continuous—that is, when in a resting or inoperative state they are still both on and undergoing minor fluctuations—whereas normative logic gates both do not produce a steady voltage when in an inoperative state (thus being procedurally discontinuous) and do not undergo minor fluctuations within such a steady-state voltage (or, more generally, a continuous signal) while in an inoperative state. I had a similar fear in regard to some mathematical and computational models as I understood them in 2009: what if we were taking what was a continuous process in its biological environment, and—by using multiple elements or procedural (e.g., computational, algorithmic) steps to replicate what would have been one element or procedural step in the original—effectively making it discontinuous by introducing additional intermediate steps? Or would we simply be introducing a number of continuous steps—that is, if each element or procedural step were operationally continuous in the same way that the components of a neuron are, would it then preserve operational continuity nonetheless?

This led to my attempting to develop a modeling approach aiming to retain the same operational continuity as exists in biological neurons, which I will call the relationally isomorphic mathematical model. The biophysical processes comprising an existing neuron are what implement computation; by using biophysical-mathematical models as our modeling approach, we might be introducing an element of discontinuity by mathematically modeling the physical processes giving rise to a computation/calculation, rather than modeling the computation/calculation directly. It might be the difference between modeling a given program and modeling the physical processes comprising the logic elements that give rise to the program. Thus, my novel approach during this period was to explore ways to model the computation directly.

Rather than using a host of mathematical operations to model the physical components that themselves give rise to a different type of mathematics, we instead use a modeling approach that maintains a 1-to-1 element or procedural-step correspondence with the level-of-scale that embodies the salient (i.e., aimed-for) computation. My attempts at developing this produced the following approach, though I lack the pure mathematical and computer-science background to judge its true accuracy or utility. The components, their properties, and the inputs used for a given model (at whatever scale) are substituted by numerical values, the magnitude of which preserves the relationships (e.g., ratio relationships) between components/properties and inputs, and by mathematical operations which preserve the relationships exhibited by their interaction. For instance: if the interaction between a given component/property and a given input produces an emergent inhibitory effect biologically, then one would combine them to get their difference or their factor, respectively, depending on whether they exemplify a linear or nonlinear relationship. If the component/property and the input combine to produce emergently excitatory effects biologically, one would combine them to get their sum or their product, respectively, depending on whether they increase excitation in a linear or nonlinear manner.

In an example from my notes, I tried to formulate how a chemical synapse could be modeled in this way. Neurotransmitters are given analog values such as positive or negative numbers, the sign of which (i.e., positive or negative) depends on whether it is excitatory or inhibitory and the magnitude of which depends on how much more excitatory/inhibitory it is than other neurotransmitters, all in reference to a baseline value (perhaps 0 if neutral or neither excitatory nor inhibitory; however, we may need to make this a negative value, considering that the neuron’s resting membrane-potential is electrically negative, and not electrochemically neutral). If they are neurotransmitter clusters, then one value would represent the neurotransmitter and another value its quantity, the sum or product of which represents the cluster. If the neurotransmitter clusters consist of multiple neurotransmitters, then two values (i.e., type and quantity) would be used for each, and the product of all values represents the cluster. Each summative-product value is given a second vector value separate from its state-value, representing its direction and speed in the 3D space of the synaptic junction. Thus by summing the products of all, the numerical value should contain the relational operations each value corresponds to, and the interactions and relationships represented by the first- and second-order products. The key lies in determining whether the relationship between two elements (e.g., two neurotransmitters) is linear (in which case they are summed), or nonlinear (in which case they are combined to produce a product), and whether it is a positive or negative relationship—in which case their factor, rather than their difference, or their product, rather than their sum, would be used. Combining the vector products would take into account how each cluster’s speed and position affects the end result, thus effectively emulating the process of diffusion across the synaptic junction. The model’s past states (which might need to be included in such a modeling methodology to account for synaptic plasticity—e.g., long-term potentiation and long-term modulation) would hypothetically be incorporated into the model via a temporal-vector value, wherein a third value (position along a temporal or “functional”/”operational” axis) is used when combining the values into a final summative product. This is similar to such modeling techniques as phase-space, which is a quantitative technique for modeling a given system’s “system-vector-states” or the functional/operational states it has the potential to possess.
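As a purely illustrative toy (every class name, flag, number, and the diffusion weighting below are my own placeholders, and the sketch ignores the temporal/past-state vector and the order-dependence issues discussed next), the combination rules described above might be prototyped along these lines: each cluster carries a signed state value, a quantity, and a velocity vector, and clusters are folded together by sums for linear relationships and by products for nonlinear ones.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    """A toy neurotransmitter cluster: a signed per-molecule value (+ = excitatory,
    - = inhibitory, magnitude = relative strength), a quantity, and a velocity
    vector across the synaptic junction."""
    nt_value: float
    quantity: float
    velocity: tuple[float, float, float]
    nonlinear: bool = False        # placeholder flag: combine nonlinearly?

    def state(self) -> float:
        # Cluster value = neurotransmitter value combined with its quantity.
        return self.nt_value * self.quantity

def transit_weight(velocity, distance=1.0):
    """Toy stand-in for diffusion: faster clusters contribute more fully within
    the time window (an invented weighting, not a physical diffusion model)."""
    speed = sum(v * v for v in velocity) ** 0.5
    return min(1.0, speed / distance)

def combine(clusters) -> float:
    """Fold clusters together: linear relationships are summed (the signs supply
    the excitatory/inhibitory difference); nonlinear ones are multiplied in."""
    total = 0.0
    for c in clusters:
        contribution = c.state() * transit_weight(c.velocity)
        if c.nonlinear:
            total = total * contribution if total != 0.0 else contribution
        else:
            total += contribution
    return total

# Hypothetical transmission: one excitatory cluster, one inhibitory cluster.
excitatory_like = Cluster(nt_value=+1.5, quantity=10, velocity=(0.5, 0.2, 0.0))
inhibitory_like = Cluster(nt_value=-1.0, quantity=6, velocity=(0.3, 0.0, 0.1))
print(combine([excitatory_like, inhibitory_like]))   # net signed drive on the postsynaptic side
```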

How excitatory or inhibitory a given neurotransmitter is may depend upon other neurotransmitters already present in the synaptic junction; thus if the relationship between one neurotransmitter and another is not the same as the relationship between that first neurotransmitter and an arbitrary third, then one cannot use static numerical values for them, because the sequence in which they are released would affect how cumulatively excitatory or inhibitory a given synaptic transmission is.

A hypothetically possible case of this would be if one type of neurotransmitter can bond or react with two or more other types of neurotransmitter. Let’s say that it is more likely to bond or react with one than with the other. If the chemically less attractive (or less reactive) one were released first, it would bond anyway, owing to the absence of the comparatively more attractive one; if the more attractive one were then released afterward, it would find nothing to bond with, because the first neurotransmitter would already be bound to the less attractive one.

If a given neurotransmitter’s numerical value or weighting is determined by its relation to other neurotransmitters (i.e., if one is excitatory, and another is twice as excitatory, then if the first were 1.5, the second would be 3—assuming a linear relationship), and a given neurotransmitter does prove to have a different relationship to one neurotransmitter than it does to another, then we cannot use a single static value for it. Thus we might not be able to configure the model such that the normative mathematical operations follow naturally from one another; instead, we may have to computationally determine (via the [hypothetically] subjectively discontinuous method that incurs additional procedural steps) which mathematical operations to perform, and then perform them continuously without having to stop and compute what comes next, so as to preserve subjective-continuity.
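
To make this sequence-dependence concrete (all names and effect sizes here are invented), the case above could be caricatured like this:

def bound_partner(release_order):
    """Neurotransmitter A binds whichever partner it encounters first;
    binding is treated as irreversible for this illustration."""
    return release_order[0]

EFFECT = {"A-B": 2.0, "A-C": 0.5}  # invented excitatory effect of each possible complex

for order in (["B", "C"], ["C", "B"]):
    partner = bound_partner(order)
    print(order, "->", EFFECT["A-" + partner])
# ['B', 'C'] -> 2.0 versus ['C', 'B'] -> 0.5: same ingredients, different outcome,
# so no single static value for A can capture its contribution.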

We could also run the subjectively discontinuous model at a faster speed to account for its higher quantity of steps/operations and the need to keep up with the relationally isomorphic mathematical model, which possesses comparatively fewer procedural steps. Thus subjective-continuity could hypothetically be achieved (given the validity of the present postulated basis for subjective-continuity—operational continuity) via this method of intermittent external intervention, even if we need extra computational steps to replicate the single informational transformations and signal-combinations of the relationally isomorphic mathematical model.
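
As a back-of-envelope illustration (the step counts are invented), the required compensating speed-up is simply the ratio of procedural steps between the two models:

def required_speedup(steps_discontinuous, steps_isomorphic):
    """How much faster the subjectively discontinuous model must run to stay in
    step with the relationally isomorphic model."""
    return steps_discontinuous / steps_isomorphic

# e.g., five computational steps standing in for what the isomorphic model does in one:
print(required_speedup(5, 1))  # 5.0 -- run the discontinuous model five times faster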

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Productivity Enhancement – Video Series by G. Stolyarov II

Productivity Enhancement – Video Series by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
June 2, 2013
******************************

In this series on productivity enhancement, taken from Mr. Stolyarov’s e-book The Best Self-Help is Free, Mr. Stolyarov discusses the fundamental nature of productivity and approaches by which any person can become more productive.

This series is based on Chapters 7-14 of The Best Self-Help is Free.

Part 1 – What is Productivity?

The most reliable way to achieve incremental progress in your life is by addressing and continually improving your own productivity. Productivity constitutes the difference between a world in which life is nasty, brutish, and short and one in which it is pleasant, civilized, and ever-increasing in length.

Part 2 – Reason and the Decisional Component of Productivity

In order to properly decide what ought to be produced, man can ultimately consult only one guide: his rational faculty.

Part 3 – Perfectionism — The Number One Enemy of Productivity

Perfectionism engenders a pervasive sense of futility in its practitioner and mentally inhibits him from pursuing further productive work.

Part 4 – Quantification and Productivity Targets

Quantification enables an individual to set productivity targets for himself and to escape underachievement on one hand and perfectionism on the other.

Part 5 – Habit and the Elimination of the Quality-Quantity Tradeoff

A common fallacy presumes that there is a necessary tradeoff between the quantity of work produced and the quality of that work. By this notion, one can either produce a lot of mediocre units of output or a scant few exceptional ones. While this might be true in some cases, it overlooks several important factors.

Part 6 – The Importance of Frameworks for Productivity

Time-saving, productivity-enhancing frameworks can be applied on a personal level to enable one to overcome the human mind’s limited ability to hold and process multiple pieces of information simultaneously.

Part 7 – The Benefits of Repetition to Productivity

One of the most reliable ways to reduce the amount of mental effort per unit of productive output is to create many extremely similar units of output in succession. Mr. Stolyarov discusses the advantages of structuring one’s work so as to perform many similar tasks in close succession.

Part 8 – Making Accomplishments Work for You

Producing alone is not enough. If you just let your output lie around accumulating dust or taking up computer memory, it will not boost your overall well-being. Your accomplishments can help procure health, reputation, knowledge, safety, and happiness for you — if you think about how to put them to use.

Iraq Collapse Shows Bankruptcy of Interventionism – Article by Ron Paul

Iraq Collapse Shows Bankruptcy of Interventionism – Article by Ron Paul

The New Renaissance Hat
Ron Paul
June 2, 2013
******************************

May was Iraq’s deadliest month in nearly five years, with more than 1,000 dead, both civilians and security personnel, in a rash of bombings, shootings, and other violence. As we read each day of new horrors in Iraq, it becomes more obvious that the US invasion delivered none of the peace or stability that proponents of the attack promised.

Millions live in constant fear, refugees do not return home, and the economy is destroyed. The Christian community, some 1.2 million persons before 2003, has been nearly wiped off the Iraqi map. Other minorities have likewise disappeared. Making matters worse, US support for the Syrian rebels next door has drawn the Shi’ite-led Iraqi government into the spreading regional unrest and breathed new life into extremist elements.

The invasion of Iraq opened the door to Al-Qaeda in Iraq, which did not exist beforehand, while simultaneously strengthening the hand of Iran in the region. Were the “experts” who planned for and advocated the US attack really this incompetent?

Ryan Crocker, who was US Ambassador to Iraq from 2007-2009, still speaks of the Iraqi “surge” as a great reconciliation between Sunni and Shi’ite in Iraq. He wrote recently that “[t]hough the United States has withdrawn its troops from Iraq, it retains significant leverage there. Iraqi forces were equipped and trained by Americans, and the country’s leaders need and expect our help.” He seems alarmingly out of touch with reality.

It is clear now that the “surge” and the “Iraqi Awakening” were just myths promoted by those desperate to put a positive spin on the US invasion, which the late General William Odom once called “the greatest strategic disaster in American history.” Aircraft were loaded with $100 bills to pay each side to temporarily stop killing US troops and each other, but the payoff provided a mere temporary break. Shouldn’t the measure of success of a particular policy be whether it actually produces sustained positive results?

Now we see that radical fighters who once shot at US troops in Iraq have spilled into Syria, where they ironically find their cause supported by the US government! Some of these fighters are even greeted by visiting US senators.

The US intervention in Iraq has created ever more problems. That is clear. The foreign policy “experts” who urged the US attack on Iraq now claim that the disaster they created can only be solved with more interventionism! Imagine a medical doctor noting that a particular medication is killing his patient, but to combat the side effect he orders an increase in dosage of the same medicine. Like this doctor, the US foreign policy establishment is guilty of malpractice. And, I might add, this is just what the Fed does with monetary policy.

From Iraq to Libya to Mali to Syria to Afghanistan, US interventions have an unbroken record of making matters far worse. Yet regardless of the disasters produced, for the interventionists a more aggressive US foreign policy is the only policy they offer.

We must learn the appropriate lessons from the disaster of Iraq. We cannot continue to invade countries, install puppet governments, build new nations, create centrally planned economies, engage in social engineering, and force democracy at the barrel of a gun. The rest of the world is tired of US interventionism, and the US taxpayer is tired of footing the bill for US interventionism. It is up to all of us to make it very clear to the foreign-policy establishment and the powers that be that we have had enough and will no longer tolerate empire-building. We should be more confident in ourselves and stop acting like an insecure bully.

Ron Paul, MD, is a former three-time Republican candidate for U. S. President and Congressman from Texas.

This article is reprinted with permission.

Choosing the Right Scale for Brain Emulation – Article by Franco Cortese

Choosing the Right Scale for Brain Emulation – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 2, 2013
******************************
This essay is the ninth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first eight chapters were previously published on The Rational Argumentator under the following titles:
***

The two approaches falling within this class considered thus far are (a) computational models that model the biophysical (e.g., electromagnetic, chemical, and kinetic) operation of the neurons—i.e., the physical processes instantiating their emergent functionality, whether at the scale of tissues, molecules, and/or atoms, or anything in between—and (b) abstracted models, a term which designates anything that computationally models the neuron using the (sub-neuron but super-protein-complex) components themselves as the chosen model-scale (whereas the former uses as its chosen model-scale the scale at which the physical processes emergently instantiating those higher-level neuronal components exist, such as the membrane and the individual proteins forming the transmembrane protein-complexes), regardless of whether each component is abstracted as a normative-electrical-component analogue (i.e., using circuit diagrams in place of biological schematics, like equating the lipid bilayer membrane with a capacitor connected to a variable battery) or as a mathematical model in which a relevant component or aspect of the neuron becomes a term (e.g., a variable or constant) in an equation.

It was during the process of trying to formulate different ways of mathematically (and otherwise computationally) modeling neurons or sub-neuron regions that I laid the conceptual embryo of the first new possible basis for subjective-continuity: the notion of operational isomorphism.

A New Approach to Subjective-Continuity Through Substrate Replacement

There are two other approaches to increasing the likelihood of subjective-continuity, each based on a different postulated physical basis for discontinuity, that I explored during this period. Note that these approaches are unrelated to graduality, which has been the main determining factor impacting the likelihood of subjective-continuity considered thus far. The new approaches consist of designing the NRUs so as to retain the respective postulated physical bases for subjective-continuity that exist in the biological brain. Thus they are unrelated to increasing the efficacy of the gradual-replacement procedure itself, instead being related to the design requirements of the functional-equivalents used to gradually replace the neurons, such that immediate subjective-continuity is maintained.

Operational Isomorphism

Whereas functionality deals only with the emergent effects or end-product of a given entity or process, operationality deals with the procedural operations performed so as to give rise to those emergent effects. A mathematical model of a neuron might be highly functionally equivalent while failing to be operationally equivalent in most respects. Isomorphism can be considered a measure of “sameness”, but technically means a 1-to-1 correspondence between the elements of two sets (which would correspond with operational isomorphism) or between the sums or products of the elements of two sets (which would correspond with functional isomorphism, using the definition of functionality employed above). Thus, operational isomorphism is the degree to which the sub-components (be they material, as in entities, or procedural, as in processes) of two larger-scale components, or the operational modalities possessed by each respective collection of sub-components, are equivalent.
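
Under these definitions, the distinction can be put schematically (the processes and numbers below are invented stand-ins): two processes are operationally isomorphic when their individual steps correspond one-to-one, and merely functionally isomorphic when only their aggregate results agree.

def operationally_isomorphic(steps_a, steps_b):
    """1-to-1 correspondence between the individual steps of two processes."""
    return len(steps_a) == len(steps_b) and all(a == b for a, b in zip(steps_a, steps_b))

def functionally_isomorphic(contributions_a, contributions_b):
    """Agreement only in the aggregate (here, the sum) of each process's contributions."""
    return sum(contributions_a) == sum(contributions_b)

process_a = ["open channel", "admit ions", "depolarize"]
process_b = ["look up response", "emit equivalent current"]  # different steps, same end effect
print(operationally_isomorphic(process_a, process_b))        # False
print(functionally_isomorphic([1.0, 2.0, 3.0], [6.0]))       # True: same summed effect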

To what extent does the brain possess operational isomorphism? It seems to depend on the scale being considered. At the highest scale, different areas of the nervous system are classed as systems (as in functional taxonomies) or regions (as in anatomical taxonomies). At this level the separate regions (i.e., components of a shared scale) differ widely from one another in terms of operational modality; each processes information very differently from the way the other components on the same scale do. If this scale were chosen as the model-scale of our replication-approach, and the preceding premise (that the physical basis for subjective-continuity is the degree of operational isomorphism between components at a given scale) accepted, then we would in such a case have a high probability of replicating functionality, but a low probability of retaining subjective-continuity through gradual replacement. This would be true even if we used the degree of operational isomorphism between separate components as the only determining factor for subjective-continuity, and ignored concerns of graduality (e.g., the scale or rate—or scale-to-rate ratio—at which gradual substrate replacement occurs).

Contrast this to the molecular scale, where the operational modality of each component (being a given molecule) and the procedural rules determining the state-changes of components at this scale are highly isomorphic. The state-changes of a given molecule are determined by molecular and atomic forces. Thus if we use an informational-functionalist approach, choose a molecular scale for our model, and accept the same premises as the first example, we would have a high probability of both replicating functionality and retaining subjective-continuity through gradual replacement because the components (molecules) have a high degree of operational isomorphism.

Note that this is only a requirement for the sub-components instantiating the high-level neural regions/systems that embody our personalities and higher cognitive faculties such as the neocortex — i.e., we wouldn’t have to choose a molecular scale as our model scale (if it proved necessary for the reasons described above) for the whole brain, which would be very computationally intensive.

So at the atomic and molecular scale the brain possesses a high degree of operational isomorphism. On the scale of the individual protein complexes, which collectively form a given sub-neuronal component (e.g., an ion channel), components still appear to possess a high degree of operational isomorphism, because all state-changes are determined by the rules governing macroscale proteins and protein-complexes (i.e., biochemistry, and particularly protein-protein interactions); since all components at this scale are built from the same general constituents (amino acids), the factors determining state-changes at this level are shared by all of them. The scale of individual neuronal components, however, seems to possess a comparatively lesser degree of operational isomorphism. Some ion channels are ligand-gated while others are voltage-gated. Thus, different aspects of physicality (i.e., molecular shape and voltage, respectively) form the procedural rules determining state-changes at this scale. Since there are two different determining factors at this scale, its degree of operational isomorphism is comparatively less than that of the protein and protein-complex scale and the molecular scale, both of which appear to have only one governing procedural-rule set. The scale of individual neurons, by contrast, appears to possess a greater degree of operational isomorphism: every neuron fires according to its threshold value, summing analog action-potential values into a binary output (i.e., the neuron either fires or does not). All individual neurons operate in a highly isomorphic manner; even though individual neurons of a given type are more operationally isomorphic in relation to each other than to neurons of another type, all neurons regardless of type still act in a highly isomorphic manner. However, the scale of neuron-clusters and neural networks, which operate and communicate according to spatiotemporal sequences of firing patterns (action-potential patterns), appears to possess a lesser degree of operational isomorphism than individual neurons, because different sequences of firing patterns will mean different things to two respective neural clusters or networks. Also note that at this scale the degree of functional isomorphism between components appears to be less than their degree of operational isomorphism—that is, the way each cluster or network operates is more similar across clusters than is their actual function (i.e., what they effectively do). And lastly, at the scale of high-level neural regions/systems, components (i.e., neural regions) differ significantly in morphology, in operationality, and in functionality; thus they appear to constitute the scale possessing the least operational isomorphism.

I will now illustrate the concept of operational isomorphism using the physical-functionalist and the informational-functionalist NRU approaches, respectively, as examples. In terms of the physical-functionalist (i.e., prosthetic neuron) approach, both the passive (i.e., “direct”) and the CPU-controlled sub-classes, respectively, are operationally isomorphic. An example of a physical-functionalist NRU that would not possess operational isomorphism is one that uses a passive-physicalist approach for one type of component (e.g., voltage-gated ion channels) and a CPU-controlled/cyber-physicalist approach [see Part 4 of this series] for another type of component (e.g., ligand-gated ion channels)—on that scale the components act according to different technological and methodological infrastructures, exhibit different operational modalities, and thus appear to possess a low degree of operational isomorphism. Note that the concern is not the degree of operational isomorphism between the functional-replication units and their biological counterparts, but rather the degree of operational isomorphism between the functional-replication units and other units on the same scale.

Another possibly relevant type of operational isomorphism is the degree of isomorphism between the individual sub-components or procedural operations (i.e., “steps”) composing a given component, designated here as intra-operational isomorphism. While very similar to the degree of isomorphism for the scale immediately below, this differs from (i.e., is not equivalent to) such a designation in that the sub-components of a given larger component could be functionally isomorphic in relation to each other without being operationally isomorphic in relation to all other components on that scale. The passive sub-approach of the physical-functionalist approach would possess a greater degree of intra-operational isomorphism than would the CPU-controlled/cyber-physicalist sub-approach, because presumably each component would interact with the others (via physically embodied feedback) according to the same technological and methodological infrastructure—be it mechanical, electrical, chemical, or otherwise. The CPU-controlled sub-approach, by contrast, would possess a lesser degree of intra-operational isomorphism, because the sensors, CPU, and the electric or electromechanical systems, respectively (the three main sub-components for each singular neuronal component—e.g., an artificial ion channel), operate according to different technological and methodological infrastructures and thus exhibit alternate operational modalities in relation to each other.

In regard to the informational-functionalist approach, an NRU model that would be operationally isomorphic is one wherein, regardless of the scale used, the type of approach used to model a given component on that scale is as isomorphic with the ones used to model other components on the same scale as is possible. For example, if one uses a mathematical model to simulate spiking regions of the dendritic spine, then one shouldn’t use a non-mathematical (e.g., strict computational-logic) approach to model non-spiking regions of the dendritic spine. Since the number of variations to the informational-functionalist approach is greater than could exist for the physical-functionalist approach, there are more gradations to the degree of operational isomorphism. Using the exact same branches of mathematics to mathematically model the two respective components would incur a greater degree of operational isomorphism than if we used alternate mathematical techniques from different disciplines to model them. Likewise, if we used different computational approaches to model the respective components, then we would have a lesser degree of operational isomorphism. If we emulated some components while merely simulating others, we would have a lesser degree of operational isomorphism than if both were either strictly simulatory or strictly emulatory.
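
A toy grading of these gradations (the labels and scores below are invented; only their ordering follows the reasoning above) might look like this:

def isomorphism_grade(approach_a, approach_b):
    """Score how operationally isomorphic two same-scale component models are,
    judged purely by how similar their modeling approaches are."""
    if approach_a["mode"] != approach_b["mode"]:
        return 0.25  # one component emulated, the other merely simulated
    if approach_a["kind"] != approach_b["kind"]:
        return 0.5   # e.g., mathematical vs. strict computational-logic
    if approach_a["discipline"] != approach_b["discipline"]:
        return 0.75  # same kind of model, but different branches or techniques
    return 1.0       # same branch of mathematics (or same computational approach)

spiking_region = {"kind": "mathematical", "discipline": "differential equations", "mode": "simulation"}
non_spiking    = {"kind": "computational", "discipline": "logic rules", "mode": "simulation"}
print(isomorphism_grade(spiking_region, non_spiking))  # 0.5 -- a lesser degree of operational isomorphism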

If this premise proves true, it suggests that when picking the scale of our replication-approach (be it physical-functionalist or informational-functionalist), we should choose a scale that exhibits operational isomorphism—for example, the molecular scale rather than the scale of high-level neural-regions, and that we should not model one component (e.g., a molecular system) with types of modeling techniques widely dissimilar to those we use for another component on the same scale.

Note that, unlike operational-continuity, the degree of operational isomorphism was not an explicit concept or potential physical basis for subjective-continuity at the time of my working on immortality (i.e., this concept wasn’t yet fully fleshed out in 2010), but rather was formulated in response to going over my notes from this period so as to distill the broad developmental gestalt of my project; though it appears to be somewhat inherent in those notes (i.e., hinted at by them), it wasn’t made explicit until relatively recently.

The next chapter describes the rest of my work on technological approaches to techno-immortality in 2010, focusing on a second new approach to subjective-continuity through a gradual-substrate-replacement procedure, and concluding with an overview of the ways my project differs from the other techno-immortalist projects.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Tapping the Transcendence Drive – Article by D.J. MacLennan

Tapping the Transcendence Drive – Article by D.J. MacLennan

The New Renaissance Hat
D. J. MacLennan
June 2, 2013
******************************

What do we want? No, I mean, what do we really want?

Your eyes flick back and forth between your smartphone and your iPad; your coffee cools on the dusty coaster beside the yellowing PC monitor; you momentarily look to the green vista outside your window but don’t fully register it; Facebook fade-scrolls the listless postings of tens of phase-locked ‘friends’, while the language-association areas of your brain chisel at your clumsy syntax, relentlessly sculpting it down to the 140-character limit of your next Twitter post.

The noise, the noise; the pink and the brown, the blue and the white. What do we want? How do we say it?

As I am a futurist, it’s understandable that people sometimes ask me what I can tell them about the future. What do I say? How about, “Well, it won’t be the same as the past”? On many levels, this is an unsatisfying answer. But, importantly, it is neither a stupid nor an empty one. If it sounds a bit Zen, that is only because people are so used to a mode of thinking about the future that has it looking quite a lot like the past, but with more shiny bits and bigger (and much flatter) flatscreens.

What I prefer to say, when there is more time available for the conversation, is, “It depends on what you, and others, want, and upon what you do to get those things.” Another unsatisfying response?

Where others see shiny stuff, I see the physical manifestations of drives. After all, what are Facebook, Twitter, and iPads but manifestations of drives? Easy, isn’t it? We can now glibly state that Twitter and Facebook are manifestations of the drive to communicate, and that the iPad is a manifestation of the desire to possess shiny stuff that does a slick job of enabling us to better pursue our recreational, organizational, and communicational drives.

There are, however, problems with this way of looking at drives. If, for example, we assume, based on the evidence we see from the boom in the use of communication technologies, that people have a strong drive to stay in touch with each other, we will simply churn out more and more of the same kinds of communication devices and platforms. If, on the other hand, we look at the overarching drive behind the desire to communicate, we can better address the real needs of the end user.

As another example, we can look back to early computer gaming. What was the main drive of the teenager playing Pong on Atari’s first arcade version of the game, released in 1972? If you had asked an impartial observer in 1972, they might well have opined that the fun of Pong stemmed from the fact that it was like table tennis; table tennis is fun, so a bleepy digital version of it in a big yellow box should also be fun. While not completely incorrect, such an opinion would be based solely upon the then-current gaming context. In following the advice of such an observer, an arcade-game manufacturer might have invested, and probably lost, an enormous amount of money in producing more and more electronic versions of simple tabletop games. But, fortunately for the computer-game industry, many manufacturers realized that the fun of arcade games was largely in the format, and so began to abandon the notion that they should be digital representations of physical games.

If we jump to a modern MMORPG involving player avatars, such as World of Warcraft, we find a situation radically different from that which prevailed in 1972, but I would argue that many observers still make the same kinds of mistakes in extrapolating the drives of the players. It’s all about “recreation” and “role-playing”, right?

I think that many technology manufacturers underestimate and misunderstand our true drives. I admit to being an optimist on such matters, but what if, just for a moment, we assume that the drives of technology-obsessed human beings (even the ones playing Angry Birds, or posting drunken nonsense on Facebook) are actually grand and noble ones? What if we really think about what it is that they are trying to do? Now we begin to get somewhere. We can then see the Facebook postings as an individual’s yearning for registration of his or her existence; a drive towards self-actualization with a voice augmented beyond the hoarse squeak of the physical one. We can see individuals’ appreciation of the clean lines of their iPads as a desire for rounded-corner order in a world of filth and tangle. We can see their enjoyment of moving their avatar around World of Warcraft as the beginnings of a massive stretching of their concept of self, to a point where it might break open and merge colorfully with the selves of others.

One hundred and forty characters: I know it doesn’t look much like a drive for knowledge and transcendence, but so what? Pong didn’t look much like Second Life; the telegraph didn’t look much like the iPad. The past is a poor guide to the future. A little respect for, and more careful observation of, what might be the true drives of the technology-obsessed would, I think, help us to create a future enhanced by enabling technologies, and not one awash with debilitating noise.

D.J. MacLennan is a futurist writer and entrepreneur, and is signed up with Alcor for cryonic preservation. He lives in, and works from, a modern house overlooking the sea on the coast of the Isle of Skye, in the Highlands of Scotland.

See more of D.J.’s writing at extravolution.com and futurehead.com.