Month: May 2013

Gradual Neuron Replacement for the Preservation of Subjective-Continuity – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 19, 2013
******************************
This essay is the fourth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first three chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, and “Concepts for Functional Replication of Biological Neurons”.
***

Gradual Uploading Applied to Single Neurons (2008)

In early 2008 I was trying to conceptualize a means of applying the logic of gradual replacement to single neurons under the premise that extending the scale of gradual replacement to individual sections of the neuronal membrane and its integral membrane proteins—thus increasing the degree of graduality between replacement sections—would increase the likelihood of subjective-continuity through substrate transfer. I also started moving away from the use of normative nanotechnology as the technological and methodological infrastructure for the NRUs, as it would delay the date at which these systems could be developed and experimentally verified. Instead I started focusing on conceptualizing systems that electromechanically replicate the functional modalities of the small-scale integral-membrane-components of the neuron. I was calling this approach the “active mechanical membrane” to differentiate it from the electro-chemical-mechanical modalities of the nanotech approach. I also started using MEMS rather than NEMS for the underlying technological infrastructure (because MEMS are less restrictive) while identifying NEMS as preferred.

I felt that trying to replicate the metabolic replacement rate in biological neurons should be the ideal to strive for, since we know that subjective-continuity is preserved through the gradual metabolic replacement (a.k.a. molecular turnover) that occurs in the existing biological brain. My approach was to measure the normal rate of metabolic replacement in existing biological neurons and the scale at which such replacement occurs (i.e., are the sections being metabolically replaced single molecules, molecular complexes, or whole molecular clusters?). Then, when replacing sections of the membrane with electromechanical functional equivalents, the same ratio of replacement-section size to replacement time would be applied—that is, the time between sectional replacements would be increased in proportion to how much larger the artificial replacement sections are than the sections replaced through normative metabolic turnover. Replacement size (or scale) is defined as the size of the section being replaced—molecular complexes, in the case of normative metabolic replacement. Replacement time is defined as the interval between the replacement of a given section and the replacement of a section with which it is causally connected; in metabolic replacement it is the time interval between a given molecular complex being replaced and an adjacent (or directly causally connected) molecular complex being replaced.

I therefore posited the following formula:

 Ta = (Sa/Sb)*Tb,

where Sa is the size of the artificial-membrane replacement sections, Sb is the size of the metabolic replacement sections, Tb is the time interval between the metabolic replacement of two successive metabolic replacement sections, and Ta is the time interval that needs to be applied to the comparatively larger artificial-membrane replacement sections in order to preserve, at that larger scale, the same replacement-rate factor (and correspondingly the same degree of graduality) that exists in normative metabolic replacement.
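
To make the scaling relation concrete, the following is a minimal sketch in Python of how the formula could be applied; the function name and the illustrative section sizes and turnover interval are hypothetical placeholders, not measured values:

```python
def artificial_replacement_interval(s_artificial, s_metabolic, t_metabolic):
    """Scale the replacement interval so that the ratio of section size to
    replacement time matches normative metabolic turnover.

    s_artificial -- size of an artificial-membrane replacement section (Sa)
    s_metabolic  -- size of a normative metabolic replacement section (Sb, same units)
    t_metabolic  -- time between metabolic replacements of causally adjacent sections (Tb)
    Returns Ta = (Sa / Sb) * Tb.
    """
    return (s_artificial / s_metabolic) * t_metabolic

# Illustrative numbers only: an artificial section 100x larger than the metabolically
# replaced complexes, whose turnover interval is assumed to be one hour.
ta = artificial_replacement_interval(s_artificial=50.0, s_metabolic=0.5, t_metabolic=3600.0)
print(f"Replacement interval for the artificial section: {ta:.0f} s")  # 100x the size -> 100x the interval
```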

The use of the time-to-scale factor corresponding with normative molecular turnover or “metabolic replacement” follows from the fact that we know subjective-continuity through substrate replacement is successful at this time-to-scale ratio. However, the lack of a non-arbitrarily quantifiable measure of time, and the fact that time is infinitely divisible (i.e., it can be broken down into smaller intervals to an arbitrarily large degree), logically necessitates that the salient variable is not time but rather causal interaction between co-affective or “causally coupled” components. Interaction between components, and the state-transitions each component or procedural step undergoes, are the only viable quantifiable measures of time. Thus, while time is the relevant variable in the above equation, a better (i.e., more methodologically rigorous) variable would be a measure of either (a) the number of causal interactions occurring between co-affective or “adjacent” components within the replacement interval Ta, which is synonymous with the frequency of causal interaction; or (b) the number of state-transitions a given component undergoes within the interval Ta. While the two should be generally correlative, in that state-transitions are facilitated via causal interaction among components, state-transitions may be a better metric because they allow us to quantitatively compare categorically dissimilar types of causal interaction that otherwise couldn’t be summed into a single variable or measure. For example, if one type of molecular interaction has a greater effect on the state-transitions of either component involved (i.e., facilitates a comparatively greater state-transition) than does another type of molecular interaction, then quantifying a measure of causal interactions may be less accurate than quantifying a measure of the magnitude of state-transitions.
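
As a rough sketch of how the time variable could be swapped out, the replacement interval can be re-expressed as a budget of causal interactions or state-transitions, assuming an average transition rate or per-transition magnitudes can be measured for the components in question; all names and figures here are hypothetical:

```python
def interval_as_transition_count(t_a_seconds, transitions_per_second):
    """Re-express the replacement interval Ta as a count of component
    state-transitions, given an observed average transition rate."""
    return t_a_seconds * transitions_per_second

def interval_as_weighted_transitions(transition_magnitudes):
    """Alternative metric: sum per-transition magnitudes so that categorically
    dissimilar causal interactions become commensurable in one measure."""
    return sum(transition_magnitudes)

# Illustrative only: a 360,000 s interval for components undergoing ~10 transitions per second.
budget = interval_as_transition_count(360_000, 10)
```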

In this way the rate of gradual replacement, despite being on a scale larger than normative metabolic replacement, would hypothetically follow the same degree of graduality with which biological metabolic replacement occurs. This was meant to increase the likelihood of subjective-continuity through a substrate-replacement procedure (both because it is necessarily more gradual than gradual replacement of whole individual neurons at a time, and because it preserves the degree of graduality that exists through the normative metabolic replacement that we already undergo).

Replicating Neuronal Membrane and Integral Membrane Components

Thus far, two main classes of neuron-replication approach have been identified: the informationalist-functionalist and the physicalist-functionalist, the former corresponding to computational and simulation/emulation approaches and the latter to physically embodied, “prosthetic” approaches.

The physicalist-functionalist approach, however, can at this point be further sub-divided into two sub-classes. The first can be called “cyber-physicalist-functionalist”, which involves controlling the artificial ion-channels and receptor-channels via normative computation (i.e., an internal CPU or controller-circuit) operatively connected to sensors and to the electromechanical actuators and components of the ion and receptor channels (i.e., sensing the presence of an electrochemical gradient or difference in electrochemical potential [equivalent to relative ionic concentration] between the respective sides of a neuronal membrane, and activating the actuators of the artificial channels to either open or remain closed, based upon programmed rules). This sub-class is an example of a cyber-physical system, which designates any system with a high level of connection or interaction between its physical and computational components, itself a class of technology that grew out of embedded systems, which designates any system using embedded computational technology and includes many electronic devices and appliances.

This is one further functional step removed from the second approach, which I was then simply calling the “direct” method, but which would be more accurately called the passive-physicalist-functionalist approach. Electronic systems are differentiated from electric systems by being active (i.e., performing computation or, more generally, signal-processing), whereas electric systems are passive and aren’t meant to transform (i.e., process) incoming signals (though any computational system’s individual components must at some level be composed of electric, passive components). Whereas the cyber-physicalist-functionalist sub-class has computational technology controlling its processes, the passive-physicalist-functionalist approach has components emergently constituting a computational device. This consisted of providing the artificial ion-channels with a means of opening in the presence of a given electric potential difference (i.e., voltage), and the receptor-channels with a means of opening in response to the unique attributes of the neurotransmitter each corresponds to (such as chemical bonding, as in ligand-based receptors, or alternatively in response to its electrical properties in the same manner – i.e., according to the same operational modality – as the artificial ion-channels), without a CPU correlating the presence of an attribute measured by sensors with the corresponding electromechanical behavior of the membrane that needs to be replicated in response thereto. Such passive systems differ from computation in that they only require feedback between components: a system of mechanical, electrical, or electromechanical components is operatively connected so as to produce specific system-states or processes in response to specific sensed system-states of its environment or itself. An example of this in regard to the present case would be constructing an ionic channel from piezoelectric materials, such that the presence of a certain electrochemical potential induces internal mechanical strain in the material; the spacing, dimensions, and quantity of the segments would be designed so that the channel opens or closes as a single unit when internal mechanical strain is induced by one electrochemical potential, while remaining unresponsive (or insufficiently responsive—i.e., not opening all the way) to another electrochemical potential. Biological neurons work in a similarly passive way, in which systems are organized to exhibit specific responses to specific stimuli in basic stimulus-response causal sequences by virtue of their own properties rather than by external control of individual components via a CPU.

However, I found the cyber-physicalist approach preferable, if it proved to be sufficient, due to the ability to reprogram computational systems, which isn’t possible in passive systems without a reorganization of the component—which itself necessitates an increase in the required technological infrastructure, thereby increasing cost and design requirements. This limit on reprogramming also imposes a limit on our ability to modify and modulate the operation of the NRUs (which will be necessary to retain the function of neural plasticity—presumably a prerequisite for experiential subjectivity and memory). The cyber-physicalist approach also seemed preferable due to a larger degree of variability in its operation: it would be easier to operatively connect electromechanical membrane components (e.g., ionic channels, ion pumps) to a CPU, and through the CPU to sensors, programming it to elicit a specific sequence of ionic-channel opening and closing in response to specific sensor-states, than it would be to design artificial ionic channels that respond directly to the presence of an electric potential with sufficient precision and accuracy.

In the cyber-physicalist-functionalist approach the membrane material is constructed so as to be (a) electrically insulative, while (b) remaining thin enough to act as a capacitor via the electric potential differential (which is synonymous with voltage) between the two sides of the membrane.

The ion-channel replacement units consisted of electromechanical pores that open for a fixed amount of time in the presence of an ion gradient (a difference in electric potential between the two sides of the membrane); this was to be accomplished electromechanically via a means of sensing membrane depolarization (such as through the use of reference electrodes) connected to a microcircuit (or nanocircuit, hereafter referred to as a CPU) programmed to open the electromechanical ion-channels for a length of time corresponding to the rate of normative biological repolarization (i.e., the time it takes to restore the membrane polarization to the resting-membrane-potential following an action-potential), thus allowing the influx of ions at a rate equal to the biological ion-channels. Likewise sections of the pre-synaptic membrane were to be replaced by a section of inorganic membrane containing units that sense the presence of the neurotransmitter corresponding to the receptor being replaced, which were to be connected to a microcircuit programmed to elicit specific changes (i.e., increase or decrease in ionic permeability, such as through increasing or decreasing the diameter of ion-channels—e.g., through an increase or decrease in electric stimulation of piezoelectric crystals, as described above—or an increase or decrease in the number of open channels) corresponding to the change in postsynaptic potential in the biological membrane resulting from postsynaptic receptor-binding. This requires a bit more technological infrastructure than I anticipated the ion-channels requiring.
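
A minimal sketch of the kind of fixed-duration control rule described above; the sensor and actuator callables, the threshold, and the timing value are hypothetical placeholders rather than anything specified in the essay:

```python
import time

# Illustrative parameters; in practice these would be measured from the neuron being replaced.
DEPOLARIZATION_THRESHOLD_MV = -55.0  # membrane potential at which the channel should open
OPEN_DURATION_S = 0.002              # interval chosen to match normative repolarization time

def ion_channel_controller(read_membrane_potential_mv, open_channel, close_channel):
    """One pass of the controlling circuit for a single artificial ion-channel:
    if depolarization past the threshold is sensed, open the electromechanical
    pore for a fixed interval and then close it."""
    if read_membrane_potential_mv() >= DEPOLARIZATION_THRESHOLD_MV:
        open_channel()
        time.sleep(OPEN_DURATION_S)  # remain open for the normative repolarization interval
        close_channel()
```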

While the accurate and active detection of particular types and relative quantities of neurotransmitters is normally ligand-gated, we have a variety of potential, mutually exclusive approaches. For ligand-based receptors, sensing the presence and steepness of electrochemical gradients may not suffice. However, we don’t necessarily have to use ligand-receptor fitting to replicate the functionality of ligand-based receptors. If there is a difference in charge (i.e., valence) between the neurotransmitter needing to be detected and other neurotransmitters, and the degree of that difference is detectable given the precision of our sensing technologies, then a means of sensing a specific charge may prove sufficient. However, I also developed an alternate method of replicating ligand-based-receptor functionality in the event that sensing electric charge proved insufficient. Different chemicals (e.g., neurotransmitters, but also potentially electrolyte solutions) have different volume-to-weight ratios. We equip the artificial-membrane sections with an empty compartment capable of measuring the weight of its contents. Since the volume of the compartment is already known, this would allow us to identify specific neurotransmitters (or other relevant molecules and compounds) based on their unique weight-to-volume ratio. By operatively connecting the unit’s CPU to this sensor, we can program specific operations (i.e., the receptor opens, allowing entry for a fixed amount of time, or remains closed) in response to the detection of specific neurotransmitters. Though it is unlikely to be necessary, this method could also work for the detection of specific ions, and thus could serve as the operating mechanism underlying the artificial ion-channels as well—though this would probably require higher-precision volume-to-weight comparison than is required for neurotransmitters.
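
A sketch of the weight-to-volume identification step, assuming a sensing compartment of known volume and a calibration table of densities; the table values and tolerance below are illustrative placeholders, not real measurements:

```python
# Hypothetical calibration table: density (mass per unit compartment volume) for the
# neurotransmitters this receptor-replacement unit is programmed to distinguish.
KNOWN_DENSITIES = {"glutamate": 1.54, "GABA": 1.11, "dopamine": 1.26}
TOLERANCE = 0.02  # allowable measurement error, arbitrary for illustration

def identify_by_density(measured_mass, compartment_volume):
    """Return the neurotransmitter whose known weight-to-volume ratio best matches
    the contents of the sensing compartment, or None if nothing matches."""
    density = measured_mass / compartment_volume
    best = min(KNOWN_DENSITIES, key=lambda name: abs(KNOWN_DENSITIES[name] - density))
    return best if abs(KNOWN_DENSITIES[best] - density) <= TOLERANCE else None

# The unit's CPU would then map the identified transmitter to a programmed operation,
# e.g., opening the artificial receptor-channel for a fixed amount of time.
```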

Sectional Integration with Biological Neurons

Integrating replacement-membrane sections with adjacent sections of the existing lipid bilayer membrane becomes a lot less problematic if the scale at which the membrane sections are handled (determined by the size of the replacement membrane sections) is homogeneous, as in the case of biological tissues, rather than molecularly heterogeneous—that is, if we are affixing the edges to a biological tissue rather than to complexes of individual lipid molecules. Reasons for hypothesizing a higher probability of homogeneity at the replacement scale include (a) the ability of experimenters and medical researchers to puncture the neuronal membrane with a micropipette (so as to measure membrane voltage) without rupturing the membrane beyond functionality, and (b) the fact that sodium and potassium ions do not leak through the gaps between the individual bilipid molecules, which would be present if the membrane were heterogeneous at this scale. If we find homogeneity at the scale of sectional replacement, we can use more normative means of affixing the edges of the replacement membrane section to the existing lipid bilayer membrane, such as micromechanical fasteners, adhesive, or fusing via heating or energizing. However, I also developed an approach applicable if the scale of sectional replacement was found to be molecular and thus heterogeneous. We find an intermediate chemical that stably bonds to both the bilipid molecules constituting the membrane and the molecules or compounds constituting the artificial membrane section. Note that if the molecules or compounds constituting either must be energized so as to put them in an abnormal (i.e., unstable) energy state to make them susceptible to bonding, this is fine so long as the energies don’t reach levels damaging to the biological cell (or if such energies could be absorbed prior to impinging upon or otherwise damaging the biological cell). If such an intermediate molecule or compound cannot be found, a second intermediate chemical that stably bonds with two alternate and secondary intermediate molecules (which themselves bond to the biological membrane and the non-biological membrane section, respectively) can be used. The chances of finding a sequence of chemicals that stably bond (i.e., a given chemical forms stable bonds with the preceding and succeeding chemicals in the sequence) increase in proportion to the number of intermediate chemicals used. Note that it might be possible to apply constant external energization to certain molecules so as to force them to bond if a stable bond cannot otherwise be formed, but this would probably be economically prohibitive and potentially dangerous, depending on the levels of energy and energization-precision.

I also worked on the means of constructing and integrating these components in vivo, using MEMS or NEMS. Most of the developments in this regard are described in the next chapter. However, some specific variations on the construction procedure were necessitated by the sectional-integration procedure, which I will comment on here. The integration unit would position itself above the membrane section, guided by the data acquired by the neuron data-measurement units, which specify the constituents of a given membrane section and assign it a number corresponding to a type of artificial-membrane section in the integration unit’s section-inventory (essentially a store of stacked artificial-membrane sections). A means of disconnecting a section of lipid bilayer membrane from the biological neuron is then depressed. This could be a hollow rectangular compartment with edges that sever the lipid bilayer membrane via force (e.g., edges terminating in blades), energy (e.g., edges terminating in heating elements), or chemical corrosion (e.g., edges coated with or secreting a corrosive substance). The detached section of lipid bilayer membrane is then lifted out and compacted, to be drawn into a separate compartment for storing waste organic materials. The artificial-membrane section is subsequently transported down through the same compartment. Since it is perpendicular to the face of the container, moving the section down through the compartment should force the intracellular fluid (which would presumably have leaked into the constructional container’s internal area when the lipid bilayer membrane section was removed) back into the cell. Once the artificial-membrane section is in place, the preferred integration method is applied.

Sub-neuronal (i.e., sectional) replacement also necessitates that any dynamic patterns of polarization (e.g., an action potential) be continued during the interval of time between section removal and artificial-section integration. This was to be achieved by chemical sensors (detecting membrane depolarization) operatively connected to actuators that manipulate ionic concentration on the other side of the membrane gap, via the release or uptake of ions from biochemical inventories, so as to induce membrane depolarization on the opposite side of the membrane gap at the right time. Techniques such as partially freezing the cell so as to slow the rate of membrane depolarization and/or the propagation velocity of action potentials were also considered.

The next chapter describes my continued work in 2008, focusing on (a) the design requirements for replicating the neural plasticity necessary for memory and subjectivity, (b) the active and conscious modulation and modification of neural operation, (c) wireless synaptic transmission, (d) ways to integrate new neural networks (i.e., mental amplification and augmentation) without disrupting the operation of existing neural networks and regions, and (e) a gradual transition from, or intermediary phase between, the physical (i.e., prosthetic) approach and the informational (i.e., computational, or mind-uploading proper) approach.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Churchland, P. S. (1989). Neurophilosophy: Toward a Unified Science of the Mind/Brain.  MIT Press, p. 30.

Pribram, K. H. (1971). Languages of the Brain: Experimental Paradoxes and Principles in Neuropsychology. New York: Prentice Hall/Brandon House.

Thoughts on Zoltan Istvan’s “The Transhumanist Wager”: A Review – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
May 18, 2013
******************************

Zoltan Istvan’s new novel The Transhumanist Wager has been compared to Ayn Rand’s Atlas Shrugged. (See, for instance, Giulio Prisco’s review.) But to what extent are the books alike, and in what respects? To be sure, the story and the writing style are gripping, the characters are vivid, and the universe created by Istvan gave me an experience highly reminiscent of my reading of Atlas Shrugged more than a decade ago. Even this alone allows me to highly recommend The Transhumanist Wager as a work of literary art – a philosophical thriller. Moreover, the didactic purpose of the novel, its interplay of clearly identified good and evil forces, and its culmination in an extensive speech where the protagonist elaborates on his philosophical principles (as well as its punctuation by multiple smaller speeches throughout) provide clear parallels to Atlas Shrugged.

Giulio Prisco calls the philosophy of The Transhumanist Wager’s protagonist, Jethro Knights, “an extreme, militant version of the radically libertarian formulation of transhumanism”. However, this is the area where I perceive the most significant departure from the parallels to Atlas Shrugged. Ayn Rand’s philosophy of Objectivism (which she did not like to be called “libertarian”, though it was in essence) has the principle of individual rights and the rejection of the initiation of force at its ethical core. Galt’s Gulch in Atlas Shrugged was formed by a withdrawal of the great thinkers and creators from the world of those who exploited and enslaved them. However, there was no active conquest of that world by Rand’s heroes; rather, without the men of the mind, the power structures of the world simply fell apart of their own accord.

Jethro Knights creates his own seasteading nation, Transhumania, a fascinating haven for innovation and a refuge for transhumanist scientists oppressed by their governments and targeted by religious fundamentalist terrorism. The concept of an autonomous bastion of innovation is timely and promising; it was echoed by the recent statements from Larry Page of Google in favor of setting aside a part of the world to allow for unbridled experimentation. Transhumania, due to its technological superiority, spectacularly beats back a hostile invasion by the combined navies of the world. It is when the Transhumanians go on the offensive that the parallels to Galt’s Gulch cease. Instead of letting the non-transhumanist world crumble or embrace transhumanism on its own accord, Jethro Knights conquers it, destroys all of its political, religious, and cultural centerpieces, and establishes a worldwide dictatorship – including some highly non-libertarian elements, such as compulsory education, restrictions on reproduction, and an espousal of the view that even some human beings who have not initiated force may not have an inviolate right to their lives, but are rather judged on their “usefulness” – however defined (perhaps, in the case of Transhumania, usefulness in advancing the transhumanist vision as understood by Jethro Knights). Jethro Knights permits a certain degree of freedom – enough to sustain technological progress, high standards of living, and due process in the resolution of everyday disputes – but, ultimately, all of the liberties in Transhumania are contingent on their compatibility with Jethro’s own philosophy; they are not recognized as absolute rights even for those who disagree. John Galt would have been gentler. He would have simply withdrawn his support from those who would not deal with him as honest creators of value, but he would have left them to their own devices otherwise, unless they initiated force against him and against other rational creators of value.

The outcome of The Transhumanist Wager is complicated by the fact that Jethro’s militancy is the direct response to the horrific acts of terrorism committed by religious fundamentalists at the behest of Reverend Belinas, who also has considerable behind-the-scenes influence on the US government in the novel. Clearly, the anti-transhumanists were the initiators of force for the majority of the novel, and, so long as they perpetrated acts of violence against pro-technology scientists and philosophers, they were valid targets for retaliation and neutralization – just like all terrorists and murderers are. For the majority of the book, I was, without question, on Jethro’s side when it came to his practice, though not always his theory – but it was upon reading about the offensive phase of his war that I came to differ in both, especially since Transhumania had the technological capacity to surgically eliminate only those who directly attacked it or masterminded such attacks, thereafter leaving the rest of the world powerless to destroy Transhumania, but also free to come to recognize the merits of radical life extension and general technological progress on its own in a less jarring, perhaps more gradual process. An alternative scenario to the novel’s ending could have been a series of political upheavals in the old nations of the world, where the leaders who had targeted transhumanist scientists were recognized to be thoroughly wasteful and destructive, and were replaced by neutral or techno-progressive politicians who, partly for pragmatic reasons and partly arising out of their own attraction to technology, decided to trade with Transhumania instead of waging war on it.

Jethro’s concept of the “omnipotender” is a vision of the individual seeking as much power as he can get, ultimately aiming to achieve power over the entire universe. It is not clear whether power in this vision means simply the ability to achieve one’s objectives, or control in a hierarchical sense, which necessarily involves the subordination of other intelligent beings. I support power in the sense of the taming of the wilderness and the empowerment of the self for the sake of life’s betterment, but not in the sense of depriving others of a similar prerogative. Ayn Rand’s vision of the proper rationally egoistic outlook is extremely clear on the point that one must neither sacrifice oneself to others nor sacrifice others to oneself. Istvan’s numerous critical references to altruism and collectivism clearly express his agreement with the first half of that maxim – but what about the second? Jethro’s statements that he would be ready to sacrifice the lives of even those closest to him in order to achieve his transhumanist vision certainly suggest that the character of Jethro might not give others the same sphere of inviolate action that he would seek for himself. Of course, Jethro also dismisses as a contrived hypothetical the suggestion that such sacrifice would be necessary (at least, in Jethro’s view, for the time being), and I agree. Yet a more satisfying response would have been not that he is ready to make such a sacrifice, but that the sacrifice itself is absolutely not required for individual advancement by the laws of reality, and therefore it is nonsensical to even acknowledge its possibility. Jethro gave his archenemy, Belinas, far too much of a philosophical concession by even picking sides in the false dichotomy between self-sacrifice to others and the subjugation of others to oneself.

Perhaps the best way to view The Transhumanist Wager is as a cautionary tale of what might happen if the enemies of technological progress and radical life extension begin to forcefully clamp down on the scientists who try to make these breakthroughs happen. A climate of violence and terror, rather than civil discourse and an embrace of life-enhancing progress, will breed societal interactions that follow entirely different rules, and produce entirely different incentives, from those which allow a civilized society to smoothly function and advance. I hope that we, at least in the Western world, can avoid a scenario where those different rules and incentives take hold.

I am a transhumanist, but I am also a humanist, in the sense that I see the advancement of humanity and the improvement of the human condition as the desired aims of technological progress. In this sense, I am fond of the reference to the goal of transhumanists as the achievement of a “humanity plus”. Transhumanism is and ought to be, fundamentally, a continuation of the melioristic drive of the 18th-century Enlightenment, ridding man of the limitations and terrible sufferings which have historically been considered part of necessary “human nature” but which are, in reality, the outcome of the contingent material shortcomings with which our species happened to be burdened from its inception. Will it be possible to entice and persuade enough people to embrace the transhumanist vision voluntarily? I certainly hope so, since even a sizable minority of individuals would suffice to drive forward the technological advances which the rest of humanity would embrace for other, non-philosophical reasons.

In the absence of a full-fledged embrace of this humanistic vision of transhumanism, at the very least I hope that it would be possible to “sneak around” the common objections and restrictions and achieve a technological fait accompli through the dissemination of philosophically neutral tools, such as the Internet and mobile devices, that enhance individual opportunities and alter the balance of power between individuals and institutions. In this possible future, some of the old “cultural baggage” – as Jethro would refer to it – would most likely remain – including religions, which are among the hardest cultural elements for people to give up. However, this “baggage” itself would gradually evolve in its essential outlook and impact upon the world, much like Western Christianity today is far gentler than the Christianity of the 3rd, 11th, or 17th centuries. Perhaps, instead of fighting transhumanism, some representatives of old cultural labels will attempt to preserve their own relevance amidst transhuman-oriented developments. This will require reinterpreting doctrines, and will certainly engender fierce debate within many religious, political, and societal circles. However, there may yet be hope that the progressive wings of each of these old institutions and ideologies (“progressive” in the sense of being open to progress, not to be mistaken for any current partisan affiliation) will do the equivalent work to that entailed in a transhumanist revolution, except in a gradual, peaceful, seamless manner.

Yet, on the other hand, the immense urgency of achieving life extension is, without question, a sentiment I strongly identify with. Jethro’s experience, early in the novel, of stepping on a defective mine has autobiographical parallels to Istvan’s own experience in Vietnam. A brush with death certainly highlights the fragility of life and the urgency of pursuing its continuation. Pausing to contemplate that, were it not for a stroke of luck at some prior moment, one could be dead now – and all of the vivid and precious experiences one is having could one day be snuffed out, with not even a memory remaining – certainly motivates one to think about what the most direct, the most effective means of averting such a horrific outcome would be. Will a gradual, humane, humanistic transition to a world of indefinite life extension work out in time for us? What can we do to make it happen sooner? Can we do it within the framework of the principles of libertarianism in addition to those of transhumanism? Which approaches are the most promising at present, and which, on the other hand, could be counterproductive? How do we attempt to enlist the help of the “mainstream” world while avoiding or overcoming its opposition? For me, reading The Transhumanist Wager provided further impetus to keep asking these important, open, and as of yet unresolved questions – in the hopes that someday the ambition to achieve indefinite life extension in our lifetimes will give rise to a clear ultra-effective strategy that can put this most precious of all goals in sight.

Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 18, 2013
******************************
This essay is the third chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first two chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death” and “Immortality: Material or Ethereal? Nanotech Does Both!”.
***

The simplest approach to the functional replication of biological neurons I conceived of during this period involved what is normally called a “black-box” model of a neuron. This was already a concept in the wider brain-emulation community, but I had yet to find out about it. This is even simpler than the mathematically weighted Artificial Neurons discussed in the previous chapter. Rather than emulating or simulating the behavior of a neuron (i.e., using actual computational—or, more generally, signal—processing), we (1) determine the range of input values that a neuron responds to, (2) stimulate the neuron at each interval (the number of intervals depending on the precision of the stimulus) within that input range, and (3) record the corresponding range of outputs.

This reduces the neuron to essentially a look-up table (or, more formally, an associative array). The input ranges I originally considered (in 2007) consisted of a range of electrical potentials, but were later (in 2008) developed to include different cumulative organizations of specific voltage values (i.e., some inputs activated and others not) and finally the chemical inputs and outputs of neurons. I eventually saw the black-box approach as being applicable at the sub-neuron scale—e.g., to sections of the cellular membrane. This creates a greater degree of functional precision, bringing the functional modality of the black-box NRU class into greater accordance with the functional modality of biological neurons. (I.e., it is closer to biological neurons because they do in fact process multiple inputs separately, rather than a single cumulative sum at once, as in the previous versions of the black-box approach.) We would also have a higher degree of variability for a given quantity of inputs.
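
A minimal sketch of the look-up-table idea; the input/output values stand in for whatever quantity would actually be recorded (the numbers below are placeholders):

```python
import bisect

class BlackBoxNeuron:
    """Look-up-table (associative-array) neuron model: stimulate the biological
    neuron across its input range, record the outputs, then replay the response
    recorded for the nearest tested input whenever a new stimulus arrives."""

    def __init__(self, recorded_io):
        # recorded_io: dict mapping tested stimulus value -> recorded response
        self.inputs = sorted(recorded_io)
        self.table = dict(recorded_io)

    def respond(self, stimulus):
        # Find the recorded stimulus closest to the one presented and return its output.
        i = bisect.bisect_left(self.inputs, stimulus)
        candidates = self.inputs[max(0, i - 1):i + 1]
        nearest = min(candidates, key=lambda x: abs(x - stimulus))
        return self.table[nearest]

# Usage with an illustrative table built at 5 mV intervals over the responsive range.
neuron = BlackBoxNeuron({-70: 0.0, -65: 0.0, -60: 0.1, -55: 1.0, -50: 1.0})
print(neuron.respond(-57))  # returns the response recorded for the nearest tested input
```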

I soon chanced upon literature dealing with MEMS (micro-electro-mechanical systems) and NEMS (nano-electro-mechanical systems), which eventually led me to nanotechnology and its use in nanosurgery in particular. I saw nanotechnology as the preferred technological infrastructure regardless of the approach used; its physical nature (i.e., operational and functional modalities) could facilitate the electrical and chemical processes of the neuron if the physicalist-functionalist (i.e., physically embodied or ‘prosthetic’) approach proved either preferable or required, while the computation required for its normative functioning (regardless of its particular application) assured that it could facilitate the informationalist-functionalist approach (i.e., computational emulation or simulation of neurons) if that approach proved preferable. This was true of MEMS as well, with the sole exception of not being able to directly synthesize neurotransmitters via mechanosynthesis, instead being limited in this regard to the release of pre-synthesized biochemical inventories. Thus I felt that I was able to work on conceptual development of the methodological and technological infrastructure underlying both (or at least variations to the existing operational modalities of MEMS and NEMS so as to make them suitable for their intended use), without having to definitively choose one technological/methodological infrastructure over the other. Moreover, there could be processes that are reducible to computation, yet still fail to be included in a computational emulation due to our simply failing to discover the principles underlying them. The prosthetic approach had the potential to replicate this aspect by integrating such a process, as it exists in the biological environment, into its own physical operation, and performing iterative maintenance or replacement of the biological process until such time as the underlying principles of those processes could be discovered (a prerequisite for discovering how they contribute to the emergent computation occurring in the neuron) and thus included in the informationalist-functionalist approach.

Also, I had by this time come across the existing approaches to Mind-Uploading and Whole-Brain Emulation (WBE), including Randal Koene’s minduploading.org, and realized that the notion of immortality through gradually replacing biological neurons with functional equivalents wasn’t strictly my own. I hadn’t yet come across Kurzweil’s thinking in regard to gradual uploading described in The Singularity is Near (where he suggests a similarly nanotechnological approach), and so felt that there was a gap in the extant literature in regard to how the emulated neurons or neural networks were to communicate with existing biological neurons (which is an essential requirement of gradual uploading and thus of any approach meant to facilitate subjective-continuity through substrate replacement). Thus my perceived role changed from the father of this concept to filling in the gaps and inconsistencies in the already-extant approach and in further developing it past its present state. This is another aspect informing my choice to work on and further varietize both the computational and physical-prosthetic approach—because this, along with the artificial-biological neural communication problem, was what I perceived as remaining to be done after discovering WBE.

The anticipated use of MEMS and NEMS in emulating the physical processes of the neurons included first simply electrical potentials, but eventually developed to include the chemical aspects of the neuron as well, in tandem with my increasing understanding of neuroscience. I had by this time come across Drexler’s Engines of Creation, which was my first introduction to antecedent proposals for immortality—specifically his notion of iterative cellular upkeep and repair performed by nanobots. I applied his concept of mechanosynthesis to the NRUs to facilitate the artificial synthesis of neurotransmitters. I eventually realized that the use of pre-synthesized chemical stores of neurotransmitters was a simpler approach that could be implemented via MEMS, thus being more inclusive for not necessitating nanotechnology as a required technological infrastructure. I also soon realized that we could eliminate the need for neurotransmitters completely by recording how specific neurotransmitters affect the nature of membrane-depolarization at the post-synaptic membrane and subsequently encoding this into the post-synaptic NRU (i.e., length and degree of depolarization or hyperpolarization, and possibly the diameter of ion-channels or differential opening of ion-channels—that is, some and not others) and assigning a discrete voltage to each possible neurotransmitter (or emergent pattern of neurotransmitters; salient variables include type, quantity and relative location) such that transmitting that voltage makes the post-synaptic NRU’s controlling-circuit implement the membrane-polarization changes (via changing the number of open artificial-ion-channels, or how long they remain open or closed, or their diameter/porosity) corresponding to the changes in biological post-synaptic membrane depolarization normally caused by that neurotransmitter.
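
A sketch of the encoding scheme just described; the transmitters, discrete codes, and response parameters below are hypothetical placeholders standing in for values that would have to be recorded from the biological synapse:

```python
# Hypothetical scheme: each neurotransmitter (or salient pattern of neurotransmitters)
# is assigned a discrete voltage code transmitted over the artificial "synaptic" link;
# the post-synaptic NRU's controller decodes the code into the membrane-polarization
# change that biological receptor-binding would have produced.
TRANSMITTER_TO_CODE = {"glutamate": 1, "GABA": 2, "dopamine": 3}

CODE_TO_RESPONSE = {
    1: {"polarization_shift_mv": +5.0, "open_channels": 12, "duration_ms": 3.0},
    2: {"polarization_shift_mv": -4.0, "open_channels": 8,  "duration_ms": 5.0},
    3: {"polarization_shift_mv": +1.5, "open_channels": 3,  "duration_ms": 20.0},
}

def postsynaptic_response(received_code):
    """Decode the discrete code received from the pre-synaptic NRU into the parameters
    the controlling circuit applies to its artificial ion-channels."""
    return CODE_TO_RESPONSE.get(received_code)
```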

In terms of the enhancement/self-modification side of things, I also realized during this period that mental augmentation (particularly the intensive integration of artificial-neural-networks with the existing brain) increases the efficacy of gradual uploading by decreasing the total portion of your brain occupied by the biological region being replaced—thus effectively making that portion’s temporary operational disconnection from the rest of the brain more negligible to concerns of subjective-continuity.

While I was thinking of the societal implications of self-modification and self-modulation in general, I wasn’t really consciously trying to do active conceptual work (e.g., working on designs for pragmatic technologies and methodologies as I was with limitless-longevity) on this side of the project due to seeing the end of death as being a much more pressing moral imperative than increasing our degree of self-determination. The 100,000 unprecedented calamities that befall humanity every day cannot wait; for these dying fires it is now or neverness.

Virtual Verification Experiments

The various alternative approaches to gradual substrate-replacement were meant to be alternative designs contingent upon various premises for what was needed to replicate functionality while retaining subjective-continuity through gradual replacement. I saw the various embodiments as being narrowed down through empirical validation prior to any whole-brain replication experiments. However, I now see that multiple alternative approaches—based, for example, on computational emulation (informationalist-functionalist) and physical replication (physicalist-functionalist), the two main approaches thus far discussed—would have concurrent appeal to different segments of the population. The physicalist-functionalist approach might appeal to wide numbers of people who, for one metaphysical prescription or another, don’t believe enough in the computational reducibility of mind to bet their lives on it.

These experiments originally consisted of applying sensors to a given biological neuron, constructing NRUs based on a series of variations on the two main approaches, running each, and looking for any functional divergence over time. This is essentially the same approach outlined in the WBE Roadmap (which I had yet to discover at this point), which suggests a validation procedure involving experiments done on single neurons before moving on to the organismal emulation of increasingly complex species, up to and including the human. My thinking in regard to these experiments evolved over the next few years to also include some novel approaches that I don’t think have yet been discussed in communities interested in brain emulation.

An equivalent physical or computational simulation of the biological neuron’s environment is required to verify functional equivalence, as otherwise we wouldn’t be able to distinguish between functional divergence due to an insufficient replication-approach/NRU-design and functional divergence due to difference in either input or operation between the model and the original (caused by insufficiently synchronizing the environmental parameters of the NRU and its corresponding original). Isolating these neurons from their organismal environment allows the necessary fidelity (and thus computational intensity) of the simulation to be minimized by reducing the number of environmental variables affecting the biological neuron during the span of the initial experiments. Moreover, even if this doesn’t give us a perfectly reliable model of the efficacy of functional replication given the amount of environmental variables one expects a neuron belonging to a full brain to have, it is a fair approximator. Some NRU designs might fail in a relatively simple neuronal environment and thus testing all NRU designs using a number of environmental variables similar to the biological brain might be unnecessary (and thus economically prohibitive) given its cost-benefit ratio. And since we need to isolate the neuron to perform any early non-whole-organism experiments (i.e., on individual neurons) at all, having precise control over the number and nature of environmental variables would be relatively easy, as this is already an important part of the methodology used for normative biological experimentation anyways—because lack of control over environmental variables makes for an inconsistent methodology and thus for unreliable data.
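
A minimal sketch of the divergence test, under the assumption that both the biological neuron and the candidate NRU are driven by the same controlled input sequence and sampled at the same times; the metric and the tolerance are illustrative choices, not prescriptions from the essay:

```python
def functional_divergence(bio_outputs, nru_outputs):
    """Compare recordings from a biological neuron and its candidate NRU when both
    are driven by the same controlled input sequence, and report the mean absolute
    divergence over time. The recordings are assumed to be equal-length sequences of
    whatever output variable is being measured (e.g., membrane-potential samples)."""
    assert len(bio_outputs) == len(nru_outputs)
    return sum(abs(b - n) for b, n in zip(bio_outputs, nru_outputs)) / len(bio_outputs)

def passes_validation(bio_outputs, nru_outputs, tolerance):
    """A candidate NRU design is retained only if its divergence from the original
    stays under a chosen tolerance for the whole test run (the tolerance is a design choice)."""
    return functional_divergence(bio_outputs, nru_outputs) <= tolerance
```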

And as we increase to the whole-network and eventually organismal level, a similar reduction of the computational requirements of the NRU’s environmental simulation is possible by replacing the inputs or sensory mechanisms (from single photocell to whole organs) with VR-modulated input. The required complexity and thus computational intensity of a sensorially mediated environment can be vastly minimized if the normative sensory environment of the organism is supplanted with a much-simplified VR simulation.

Note that the efficacy of this approach in comparison with the first (reducing actual environmental variables) is hypothetically greater because going from simplified VR version to the original sensorial environment is a difference, not of category, but of degree. Thus a potentially fruitful variation on the first experiment (physical reduction of a biological neuron’s environmental variables) would be not the complete elimination of environmental variables, but rather decreasing the range or degree of deviation in each variable, including all the categories and just reducing their degree.

Anecdotally, one novel modification conceived during this period involves distributing sensors (operatively connected to the sensory areas of the CNS) in the brain itself, so that we can viscerally sense ourselves thinking—the notion of metasensation: a sensorial infinite regress caused by having sensors in the sensory modules of the CNS, essentially allowing one to sense oneself sensing oneself sensing.

Another is a seeming refigurement of David Pearce’s Hedonistic Imperative—namely, the use of active NRU modulation to negate the effects of cell (or, more generally, stimulus-response) desensitization—the fact that the more times we experience something, or indeed even think something, the more it decreases in intensity. I felt that this was what made some of us lose interest in our lovers and become bored by things we once enjoyed. If we were able to stop cell desensitization, we wouldn’t have to needlessly lose experiential amplitude for the things we love.

In the next chapter I will describe the work I did in the first months of 2008, during which I worked almost wholly on conceptual varieties of the physically embodied prosthetic (i.e., physical-functionalist) approach (particularly in gradually replacing subsections of individual neurons to increase how gradual the cumulative procedure is) for several reasons:

The original utility of ‘hedging our bets’ as discussed earlier—developing multiple approaches increases evolutionary diversity; thus, if one approach fails, we have other approaches to try.

I felt the computational side was already largely developed in the work done by others in Whole-Brain Emulation, and thus that I would be benefiting the larger objective of indefinite longevity more by focusing on those areas that were then comparatively less developed.

The perceived benefit of a new approach to subjective-continuity through a substrate-replacement procedure, aiming to increase the likelihood of gradual uploading’s success by increasing the procedure’s cumulative degree of graduality. The approach was called Iterative Gradual Replacement and consisted of undergoing several gradual-replacement procedures, wherein the class of NRU used becomes progressively less similar to the operational modality of the original, biological neurons with each iteration; the greater the number of iterations used, the less discontinuous each replacement phase is in relation to its preceding and succeeding phases. The most basic embodiment of this approach would involve gradual replacement with physicalist-functionalist (prosthetic) NRUs that are in turn gradually replaced with informationalist-functionalist (computational/emulatory) NRUs. My qualms with this approach today stem from the observation that the operational modalities of the physically embodied NRUs seem as discontinuous in relation to the operational modalities of the computational NRUs as the operational modalities of the biological neurons do. The problem seems to result from the lack of an intermediary stage between physical embodiment and computational (or second-order) embodiment.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Embedded Processor. (2013). In Encyclopædia Britannica. Retrieved from http://www.britannica.com/EBchecked/topic/185535/embedded-processor

Pine, J. (1980). Recording action potentials from cultured neurons with extracellular microcircuit electrodes. Journal of Neuroscience Methods, 2(1), 19-31.

Wolf, W. (March 2009). Cyber-physical Systems. In Embedded Computing. Retrieved February 28, 2013, from http://www.jiafuwan.net/download/cyber_physical_systems.pdf

What No One Wants to Hear About Benghazi – Article by Ron Paul

The New Renaissance Hat
Ron Paul
May 18, 2013
******************************

Congressional hearings, White House damage control, endless op-eds, accusations, and defensive denials. Controversy over the events in Benghazi last September took center stage in Washington and elsewhere last week. However, the whole discussion is again more of a sideshow. Each side seeks to score political points instead of asking the real questions about the attack on the US facility, which resulted in the death of US Ambassador Chris Stevens and three other Americans.

Republicans smell a political opportunity over evidence that the Administration heavily edited initial intelligence community talking points about the attack to remove or soften anything that might reflect badly on the president or the State Department.

Are we supposed to be shocked by such behavior? Are we supposed to forget that this kind of whitewashing of facts is standard operating procedure when it comes to the US government?

Democrats in Congress have offered an even less convincing explanation for Benghazi: that somehow the attack occurred due to Republican-sponsored cuts in the security budget at facilities overseas. With a one-trillion-dollar military budget, it is hard to take this seriously.

It appears that the Administration scrubbed initial intelligence reports of references to extremist Islamist involvement in the attacks, preferring to craft a lie that the demonstrations were a spontaneous response to an anti-Islamic video that developed into a full-out attack on the US outpost.

Who can blame the administration for wanting to shift the focus? The Islamic radicals who attacked Benghazi were the same people let loose by the US-led attack on Libya. They were the rebels on whose behalf the US overthrew the Libyan government. Ambassador Stevens was slain by the same Islamic radicals he personally assisted just over one year earlier.

But the Republicans in Congress also want to shift the blame. They supported the Obama Administration’s policy of bombing Libya and overthrowing its government. They also repeated the same manufactured claims that Gaddafi was “killing his own people” and was about to commit mass genocide if he were not stopped. Republicans want to draw attention to the President’s editing of talking points in hopes no one will notice that if the attack on Libya they supported had not taken place, Ambassador Stevens would be alive today.

Neither side wants to talk about the real lesson of Benghazi: interventionism always carries with it unintended consequences. The US attack on Libya led to the unleashing of Islamist radicals in Libya. These radicals have destroyed the country, murdered thousands, and killed the US ambassador. Some of these then turned their attention to Mali, which required another intervention by the US and France.

Previously secure weapons in Libya flooded the region after the US attack, with many of them going to Islamist radicals who make up the majority of those fighting to overthrow the government in Syria. The US government has intervened in the Syrian conflict on behalf of the same rebels it assisted in the Libya conflict, likely helping with the weapons transfers. With word out that these rebels are mostly affiliated with al Qaeda, the US is now intervening to persuade some factions of the Syrian rebels to kill other factions before completing the task of ousting the Syrian government. It is the dizzying cycle of interventionism.

The real lesson of Benghazi will not be learned because neither Republicans nor Democrats want to hear it. But it is our interventionist foreign policy and its unintended consequences that have created these problems, including the attack and murder of Ambassador Stevens. The disputed talking points and White House whitewashing are just a sideshow.

Ron Paul, MD, is a former three-time Republican candidate for U.S. President and Congressman from Texas.

This article is reprinted with permission.

Futile Temporary Totalitarianism in Boston – Video by G. Stolyarov II

The aftermath of the Boston Marathon bombings of April 15, 2013, showed all too clearly that totalitarianism does not need decades of incremental legislation and regimentation to come to this country. All it needs is the now-pervasive fear of “terrorism” – a fear which can give one man the power to shut down the economic life of an entire city for a day.

This video is based on Mr. Stolyarov’s recent essay, “Futile Temporary Totalitarianism in Boston”.

References

- “U.S. Cities With Bigger Economies Than Entire Countries” – Wall Street Journal – July 20, 2012
– “Adding up the financial costs of the Boston bombings” – Bill Dedman and John Schoen, NBC News – April 30, 2013
– “United Airlines Flight 93” – Wikipedia
– “Richard Reid” – Wikipedia
– “Umar Farouk Abdulmutallab” – Wikipedia
– “Homicides decrease in Boston for third straight year” – Matt Carroll, The Boston Globe – January 1, 2013
– “List of motor vehicle deaths in U.S. by year” – Wikipedia
– “How Scared of Terrorism Should You Be?” – Ronald Bailey, Reason Magazine – September 6, 2011
– “Terrorism Risk Insurance Act” – Wikipedia
– “Business Frets at Terrorism Tag of Marathon Attack” – Associated Press – May 13, 2013
– “TIME/CNN Poll Shows Increasing Number Of Americans Won’t Give Up Civil Liberties To Fight Terrorism” – Tim Cushing, TechDirt – May 6, 2013

Illiberal Belief #17: Democracy is a Cure-All – Article by Bradley Doucet

Illiberal Belief #17: Democracy is a Cure-All – Article by Bradley Doucet

The New Renaissance Hat
Bradley Doucet
May 14, 2013
******************************
I know it is sacrilege, but that is all the more reason to say it, and say it loud: Democracy is not the be-all, end-all, Holy Grail of politics that many imagine it to be. It is one, but only one, of the ingredients that make for good societies, and it is far from the most important one. Why point this out? If democracy is a good thing, why stir controversy by questioning just how good? Because the widespread, quasi-religious devotion to democracy in evidence today has some very nasty consequences. Democracy means “rule by the people.” The people usually rule by electing representatives, a process which is called, simply enough, representative democracy. Sometimes, as in the case of a referendum on a specific question, the people rule more directly, and this is known as direct democracy. Actually, though, “rule by the people” is a bit misleading, since “the people” are never unanimous on any given question, and neither are their chosen representatives. In practice, democracy is rule by majority (i.e., 50% + 1), or even mere plurality (i.e., more than any one other candidate but less than half) when three or more candidates compete.
***

Long before any nation had experienced anything even approaching universal suffrage, people concerned with human liberty—thinkers like Alexis de Tocqueville and John Stuart Mill—expressed concerns that the fading tyranny of kings might merely be replaced by a “tyranny of the majority.” They worried that majorities might vote away minorities’ hard-won rights to property, freedom of religion, freedom of expression, and freedom of movement. Majorities with a hate on for certain minorities might even vote away their very right to life.

History has given these worries ample justification. Democracy by itself is no guarantee of peace and freedom. Adolf Hitler’s victory in democratic 1930s Germany is only the most glaring example of popular support for an illiberal, anti-human regime. The people of Latin America have a long and hallowed tradition of rallying behind populist strongmen who repay their fealty by grinding them (or sometimes their neighbours) beneath their boot heels, all the while running their economies into the ground. Their counterparts in post-colonial Africa and certain parts of Asia have shown similarly stellar political acumen.

As writers like Fareed Zakaria (The Future of Freedom: Illiberal Democracy at Home and Abroad) point out, in those parts of the world that have successfully achieved a respectable degree of freedom and prosperity (basically Europe, the Anglosphere, and Japan and the Asian Tigers), sheer democracy has been supplemented—and preceded—by institutions like the rule of law, including an independent judiciary; secure property rights; the separation of church and state; freedom of the press; and an educated middle class. Indeed, instead of supplementing democracy, it is more accurate to say that these institutions limit the things over which the people can rule. It is enshrined in law and tradition that neither the people nor their representatives shall be above the law, violate the lives or property of others, impose their religious beliefs on others, or censor the freedom of the press. These checks on the power of the people have created, in the most successful parts of the world, not just democracies but liberal democracies.

According to Zakaria, societies that democratize before having built up these liberal institutions and the prosperity they engender are practically doomed to see their situations deteriorate instead of improve, often to the detriment of neighbouring countries, too. Liberty is simply more important than democracy, and must come first. We who are fortunate enough to live in liberal democracies would do well to remember this when judging other nations, like China, and urging them to democratize faster.

We would do well to remember it when thinking about our own societies, too. Thinkers like economist Bryan Caplan, author of The Myth of the Rational Voter: Why Democracies Choose Bad Policies, argue that even in the most liberal countries, democracy often works against liberty. Economists have been saying for a few decades now that political ignorance is an intractable problem that undermines the beneficial effects of democracy. The argument is that since a single vote has practically no chance of affecting the outcome of an election (or a referendum), the average voter has no incentive to become informed. Defenders of democracy have replied that ignorance doesn’t matter, since the ignorant essentially vote randomly, and random ignorant votes in one direction will be cancelled out by random ignorant votes in the opposite direction, leaving the well-informed in the driver’s seat.

Caplan agrees that if average voters were merely ignorant, their votes would cancel each other out, and the well-informed would be in charge and make good decisions. His central insight, though, is that voters are not merely ignorant, but irrational to boot. Voters have systematically biased beliefs, to which they are deeply attached, and those biases do not cancel each other out. Specifically, the average voter underestimates how well markets work; underestimates the benefits of dealing with foreigners; focuses on the short-term pain of job losses instead of the long-term gain of productivity increases; and tends at any given time to be overly pessimistic about the economy. These biases lead voters to support candidates and policies that undermine their own best interests.

The alternative to democracy, Caplan emphasizes, is not dictatorship, but markets. The market is not perfect, but it works a lot better than politics, because in my daily life as a producer and a consumer, I have an obvious incentive to be rational: my pocketbook. This incentive is lacking when it comes time to go to the polls, because of the aforementioned near-impossibility that my vote will determine the outcome. Given this asymmetry, we should favour markets over politics whenever possible. For those things that must be decided collectively, democracy may be the best we can do, but we should strive to decide as many things as possible privately, resorting to politics only when no other option is feasible. In other words, we should recapture the wisdom of the American Founding Fathers, rediscover the genius of constitutionally limited democracy, and reclaim some of the liberty previous generations fought so valiantly to secure. If we don’t, it might not be too much longer, in the grand scheme of things, before the Western world ceases to be a model worth emulating.

Bradley Doucet is Le Québécois Libre’s English Editor. A writer living in Montreal, he has studied philosophy and economics, and is currently completing a novel on the pursuit of happiness.

Futile Temporary Totalitarianism in Boston – Article by G. Stolyarov II

Futile Temporary Totalitarianism in Boston – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
May 13, 2013
******************************

Everyday life in the United States is still semi-free most of the time, if one goes about one’s own business and avoids flying or crossing the border. Yet, the aftermath of the Boston Marathon bombings of April 15, 2013, showed all too clearly that totalitarianism does not need decades of incremental legislation and regimentation to come to this country. All it needs is the now-pervasive fear of “terrorism” – a fear which can give one man the power to shut down the economic life of an entire city for a day.

The annual Gross Domestic Product of Boston is approximately $326 billion (based on 2011 figures from the Wall Street Journal). For one day, Boston’s GDP can be roughly estimated as ($326 billion)/365 = $893.15 million. Making the rather conservative assumption that only about half of a city’s economic activity would require people to leave their homes in any way, one can estimate the economic losses due to the Boston lockdown to be around $447 million. By contrast, how much damaged property and medical costs resulted directly from the criminal act committed by the Chechen nationalist and Islamic fundamentalist brothers Tamerlan and Dzokhar Tsarnaev? An NBC News article detailing the economic damages from the bombing estimates total medical costs to be in excess of $9 million, while total losses within the “impact zone” designated by the Boston Police Department are about $10 million. To give us a wide margin of error again, let us double these estimates and assume that the bombers inflicted total economic damage of $38 million. The economic damage done by the lockdown would still exceed this total by a factor of about 11.76 – more than an order of magnitude!
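For readers who wish to check the arithmetic, the following minimal sketch (in Python) reproduces the estimates above. The dollar figures come from the essay itself; the halving of daily GDP and the doubling of the damage estimate are the essay’s own stated assumptions, not additional data.

    # Rough reproduction of the essay's estimates; all figures in US dollars.
    annual_gdp = 326e9                 # Boston-area GDP (2011, per the WSJ figure cited)
    daily_gdp = annual_gdp / 365       # about $893.15 million per day

    lockdown_loss = daily_gdp * 0.5    # assume only half of activity requires leaving home
    bombing_damage = 2 * (9e6 + 10e6)  # medical costs + "impact zone" losses, doubled

    print(f"Daily GDP:      ${daily_gdp / 1e6:.2f} million")
    print(f"Lockdown loss:  ${lockdown_loss / 1e6:.2f} million")
    print(f"Bombing damage: ${bombing_damage / 1e6:.2f} million")
    print(f"Ratio:          {lockdown_loss / bombing_damage:.2f}")  # roughly 11.75-11.76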

It is true, of course, that the cost in terms of the length and quality of life for the three people killed and the 264 people injured by the bombings cannot be accounted for in monetary terms. But I wonder: how many years of life will $447 million in lost economic gains cost the population of Boston as a whole – especially when one considers that these economic losses affect life-sustaining sectors such as medical care and pharmaceuticals? Furthermore, to what extent would this lost productivity delay future advances that could have lengthened people’s lives had they arrived a day sooner? One will most likely never know, but the reality of opportunity cost is nonetheless always with us, and surely some massive opportunity costs were incurred during the Boston lockdown. Moreover, one type of damage does not justify or excuse another. However horrific the Boston bombings were, they were not a reason to further hinder innocent people.

Bad policy is the surest and most powerful ally of malicious, hate-driven miscreants like the Tsarnaev brothers. On April 19, the day of the lockdown, Dzokhar Tsarnaev, the sole surviving Boston Marathon bomber, hid inside a boat in a private backyard, incapacitated and nearly dead from a botched suicide attempt. Dzokhar wanted only to end his own life, and yet he could never have caused more trouble than he did during those hours, because, while the lockdown was in place, bad policy was inflicting more economic damage than the Tsarnaev brothers’ crude and clumsy attack could ever have unleashed on its own.

Only after the lockdown was lifted could a private citizen, David Henneberry, leave his house and notice that his boat had a loose cover. As Thomas Jefferson would have told the Bostonians, the price of liberty is eternal vigilance. Virtually every time malicious plots against innocent civilians are actually foiled – be it the takedown of United Airlines Flight 93 or the arrests of attempted “shoe bomber” Richard Reid and “underwear bomber” Umar Farouk Abdulmutallab – it is the vigilance of ordinary but courageous individuals that truly enhances the safety of us all.  Policies that create martial law, prevent people from leading their lives, and result in SWAT-style “sweeps” of people’s homes in search of a single individual not only do nothing to actually help capture the violent wrongdoer, but also subvert the liberty, prosperity, and quality of life for many orders of magnitude more people than any criminal cell could ever hope to undermine on its own.

Would any other dangerous condition, one not thought to be “terrorism,” ever provoke such a wildly disproportionate and oppressive reaction? Consider that Boston had 58 homicides in the year 2012. Many cities’ murder rates are much higher, sometimes reaching an average of one murder per day. Was a lockdown initiated for every third homicide in any American city? Traffic fatalities claim over 30,000 lives in the United States every year – or 10,000 times the death toll of the Boston Marathon bombing, and ten times the death toll of even the terrorist attacks of September 11, 2001. Are entire neighborhoods shut down every time there is a deadly car crash? If this were the accepted practice, all economic life – indeed most life in general – in the United States would grind to a halt.  Yet, while the most likely and widespread threats to our lives come from very mundane sources, bad policies and distorted public perceptions of risk are motivated by fear of the unusual, the grotesque, the sensational and sensationalized kinds of death. And yet, in spite of fear-mongering by politicians, the media, special interests, and those who rely exclusively on sound bites, the threat to one’s personal safety from a terrorist act is so minuscule as to safely be ignored. In fact, as Ronald Bailey of Reason Magazine discusses, the odds of being killed by a lightning bolt are about four times greater!

 Ironically enough, the very act that precipitated the Boston lockdown might not even officially be designated a terrorist act after all. If you thought that this was because politicians are suddenly coming to their senses, think again. The real reason is somewhat less intuitive and relates to insurance coverage for the businesses damaged by the attacks. Most commercial property and business-interruption insurance policies will cover losses from criminal acts, but explicitly exclude coverage for acts of terrorism, unless the business purchases special terrorism coverage reinsured by the federal Terrorism Risk Insurance Program. However, for the terrorism exclusions in many ordinary commercial insurance policies to apply, an act of terrorism has to be formally certified as such by the Secretary of the Treasury (and sometimes other officials, such as the Attorney General and the Secretary of Homeland Security). (For more details on this turn of events, read “Business Frets at Terrorism Tag of Marathon Attack” by the Associated Press.) The affected businesses really do not want the bombings to be formally classified as terrorism, as this will impede the businesses’ ability to obtain the insurance proceeds which would be integral to their recovery.

 I have no objection to the federal government refraining from certifying the bombings as a terrorist act in an effort to avoid needless bureaucratic complications that would impede recovery. However, I also detest Orwellian doublethink. If the bombings are not terrorism for one purpose, then they cannot be terrorism in any other sense. If they will not be used to justify depriving businesses of insurance proceeds, then surely they must not be used to deprive the rest of us of our freedom to move about as we wish, to pursue our economic aspirations, to retain the privacy of our homes, and to otherwise lead our lives in peace. If the bombings are not certified as terrorism, then all fear-mongering rhetoric by federal politicians about the need to heighten “security” in response to this “terrorist” act should cease as well. The law of non-contradiction is one type of law that our politicians – and the people of the United States more generally – urgently need to recognize.

I certainly hope that no future bombings of public events occur in the United States, not only out of a desire to preserve the lives of my fellow human beings, but also out of grave concern for the possibly totalitarian reaction that would follow any such heinous act. I enjoy living in peace and relative freedom day to day, but I know that it is only by the grace and perhaps the laziness of America’s political masters that I am able to do so. I continue to hope for an amazing run of good luck with regard to the non-occurrence of any particularly visible instances of mass crime, so that the people of the United States can find the time to gradually become enlightened about the real risks in their lives and the genuinely effective strategies for reducing those risks. There is hope that the American people are gradually regaining their common sense; perhaps they will drag the politicians toward reason with them – however reluctant the politicians might be to pursue sensible policies for a change.

Immortality: Material or Ethereal? Nanotech Does Both! – Article by Franco Cortese

Immortality: Material or Ethereal? Nanotech Does Both! – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 11, 2013
******************************

This essay is the second chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first chapter was previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death“.

In August 2006 I conceived of the initial cybernetic brain-transplant procedure. It originated from a very simple, even intuitive sentiment: if there were heart and lung machines and prosthetic organs, then why couldn’t these be integrated in combination with modern (and future) robotics to keep the brain alive past the death of its biological body? I saw a possibility, felt its magnitude, and threw myself into realizing it. I couldn’t think of a nobler quest than the final eradication of involuntary death, and felt willing to spend the rest of my life trying to make it happen.

First I collected research on organic brain transplantation, on maintaining the brain’s homeostatic and regulatory mechanisms outside the body (or in this case without the body), on a host of prosthetic and robotic technologies (including sensory prosthesis and substitution), and on the work in Brain-Computer-Interface technologies that would eventually allow a given brain to control its new, non-biological body—essentially collecting the disparate mechanisms and technologies that would collectively converge to facilitate the creation of a fully cybernetic body to house the organic brain and keep it alive past the death of its homeostatic and regulatory organs.

I had by this point come across online literature on Artificial Neurons (ANs) and Artificial Neural Networks (ANNs), which are basically simplified mathematical models of neurons meant to process information in a way coarsely comparable to them. There was no mention in the literature of integrating them with existing neurons or of replacing existing neurons towards the objective of immortality; their use was merely as an interesting approach to computation, particularly well suited to certain situations. While artificial neurons can be run on general-purpose hardware (massively parallel architectures being the most efficient for ANNs), I had something more akin to neuromorphic hardware in mind (though I wasn’t aware of that term just yet).

At its most fundamental level, an Artificial Neuron need not even be physical at all. Its basic definition is a mathematical model roughly based on neuronal operation – and there is nothing precluding that model from existing solely on paper, with no actual computation going on. When I discovered them, I had thought that a given artificial neuron was a physically embodied entity rather than a software simulation – i.e., an electronic device that operates in a way comparable to biological neurons. Upon learning that they were mathematical models, however, and that each AN needn’t be a separate entity from the rest of the ANs in a given AN network, I saw no problem in designing them so as to be separate physical entities (which they needed to be in order to fit the purposes I had for them – namely, the gradual replacement of biological neurons with prosthetic functional equivalents). Each AN would be a software entity run on a piece of computational substrate, enclosed in a protective casing allowing it to co-exist with the biological neurons already in place. The mathematical or informational outputs of the simulated neuron would be translated into biophysical, chemical, and electrical outputs by operatively connecting the simulation to an appropriate series of actuators (which could range from something as simple as producing electric fields or currents to the release of chemical stores of neurotransmitters), and likewise to a series of sensors to translate biophysical, chemical, and electrical properties into the mathematical or informational form needed for them to be accepted as input by the simulated AN.
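To make concrete just how little an artificial neuron need be beyond arithmetic, here is a minimal sketch of one in Python. The sigmoid activation, the weights, and the input values are illustrative assumptions only, not the specific models discussed above; the point is that the “neuron” itself is pure mathematics, which sensors and actuators would then translate to and from biophysical signals.

    import math

    def artificial_neuron(inputs, weights, bias):
        """A simplified mathematical model of a neuron: a weighted sum of
        inputs passed through a sigmoid activation function."""
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))   # output between 0 and 1

    # Illustrative values only: three inputs standing in for sensed biophysical signals.
    sensed_inputs = [0.2, 0.9, 0.4]
    synaptic_weights = [0.5, -1.2, 0.8]
    output = artificial_neuron(sensed_inputs, synaptic_weights, bias=0.1)
    print(output)   # a value an actuator layer would translate into physical output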

Thus at this point I didn’t make a fundamental distinction between replicating the functions and operations of a neuron via physical embodiment (e.g., via physically embodied electrical, chemical, and/or electromechanical systems) or via virtual embodiment (usefully considered as 2nd-order embodiment, e.g., via a mathematical or computational model, whether simulation or emulation, run on a 1st-order physically embodied computational substrate).

The potential advantages, disadvantages, and categorical differences between these two approaches were still a few months away. When I discovered ANs, still thinking of them as physically embodied electronic devices rather than as mathematical or computational models, I hadn’t yet moved on to ways of preserving the organic brain itself so as to delay its organic death. Their utility in constituting a more permanent, durable, and readily repairable supplement for our biological neurons wasn’t yet apparent.

I initially saw their utility as being intelligence amplification, extension and modification through their integration with the existing biological brain. I realized that they were categorically different from Brain-Computer Interfaces (BCIs) and normative neural prostheses in their ability to become an integral and continuous part of our minds and personalities – or more properly the subjective, experiential parts of our minds. If they communicated with single neurons and interacted with them on their own terms—if the two were operationally indistinct—then they could become a continuous part of us in a way that didn’t seem possible for normative BCI due to their fundamental operational dissimilarity with existing biological neural networks. I also collected research on the artificial synthesis and regeneration of biological neurons as an alternative to ANs. This approach would replace an aging or dying neuron with an artificially synthesized but still structurally and operationally biological neuron, so as to maintain the aging or dying neuron’s existing connections and relative location. I saw this procedure (i.e., adding artificial or artificially synthesized but still biological neurons to the existing neurons constituting our brains, not yet for the purposes of gradually replacing the brain but instead for the purpose of mental expansion and amplification) as allowing us not only to extend our existing functional and experiential modalities (e.g., making us smarter through an increase in synaptic density and connectivity, and an increase in the number of neurons in general) but even to create fundamentally new functional and experiential modalities that are categorically unimaginable to us now via the integration of wholly new Artificial Neural Networks embodying such new modalities. Note that I saw this as newly possible with my cybernetic-body approach because additional space could be made for the additional neurons and neural networks, whereas the degree with which we could integrate new, artificial neural networks in a normal biological body would be limited by the available volume of the unmodified skull.

Before I discovered ANs, I speculated in my notes as to whether the “bionic nerves” alluded to in some of the literature I had collected by this point (specifically regarding BCI, neural prosthesis, and the ability to operatively connect a robotic prosthetic extremity – e.g., an arm or a leg – via BCI) could be used to extend the total number of neurons and synaptic connections in the biological brain. This sprang from my knowledge of the operational similarities between neurons and muscle cells, both of which belong to the larger class of excitable cells.

Kurzweil’s cyborgification approach (i.e., that we could integrate non-biological systems with our biological brains to such an extent that the biological portions become so small as to be negligible to our subjective-continuity when they succumb to cell-death, thus achieving effective immortality without needing to actually replace any of our existing biological neurons at all) may have been implicit in this concept. I envisioned our brains increasing in size many times over, such that the majority of our minds would be embodied or instantiated more by the artificial portion than by the biological portions. The fact that the degree with which the loss of a part of our brain will affect our emergent personalities depends on how big that lost part is in comparison to the total size of the brain (other potential metrics alternative to size include connectivity and the degree with which other systems depend on that portion for their own normative operation) – the loss of a lobe being much worse than the loss of a neuron – follows naturally from this initial premise. The lack of any explicit statement of this realization in my notes during this period, however, makes this mere speculation.

It wasn’t until November 11, 2006, that I had the fundamental insight underlying mind-uploading—that the replacement of existing biological neurons with non-biological functional equivalents that maintain the existing relative location and connection of such biological neurons could very well facilitate maintaining the memory and personality embodied therein or instantiated thereby—essentially achieving potential technological immortality, since the approach is based on replacement and iterations of replacement-cycles can be run indefinitely. Moreover, the fact that we would be manufacturing such functional equivalents ourselves means that we could not only diagnose potential eventual dysfunctions easier and with greater speed, but we could manufacture them so as to have readily replaceable parts, thus simplifying the process of physically remediating any such potential dysfunction or operational degradation, even going so far as to include systems for the safe import and export of replacement components or as to make all such components readily detachable, so that we don’t have to cause damage to adjacent structures and systems in the process of removing a given component.

Perhaps it wasn’t so large a conceptual step from knowledge of the existence of computational models of neurons to the realization of using them to replace existing biological neurons towards the aim of immortality. Perhaps I take too much credit for independently conceiving both the underlying conceptual gestalt of mind-uploading, as well as some specific technologies and methodologies for its pragmatic technological implementation. Nonetheless, it was a realization I arrived at on my own, and was one that I felt would allow us to escape the biological death of the brain itself.

While I was aware (after a little more research) that ANNs were mathematical (and thus computational) models of neurons, hereafter referred to as the informationalist-functionalist approach, I felt that a physically embodied (i.e., not computationally emulated or simulated) prosthetic approach, hereafter referred to as the physicalist-functionalist approach, would be a better approach to take. This was because even if the brain were completely reducible to computation, a prosthetic approach would necessarily facilitate the computation underlying the functioning of the neuron (as the physical operations of biological neurons do presently), and if the brain proved to be computationally irreducible, then the prosthetic approach would in such a case presumably preserve whatever salient physical processes were necessary. So the prosthetic approach didn’t necessitate the computational-reducibility premise – but neither did it preclude such a view, thereby allowing me to hedge my bets and increase the cumulative likelihood of maintaining subjective-continuity of consciousness through substrate-replacement in general.

This marks a telling proclivity recurrent throughout my project: the development of mutually exclusive and methodologically and/or technologically alternate systems for a given objective, each based upon alternate premises and contingencies – a sort of possibilizational web unfurling fore and outward. After all, if one approach failed, then we had alternate approaches to try. This seemed like the work-ethic and conceptualizational methodology that would best ensure the eventual success of the project.

I also had less assurance in the sufficiency of the informational-functionalist approach at the time, stemming mainly from a misconception about the premises of normative Whole-Brain Emulation (WBE). When I first discovered ANs, I was dubious about the computational reducibility of the mind because I thought that it relied on the premise that neurons act in a computational fashion (i.e., like normative computational paradigms) to begin with—thus a conflation of classical computation with neural operation—rather than on the conclusion, drawn from the Church-Turing thesis, that mind is computable because the universe is. It is not that the brain is a computer to begin with, but that we can model any physical process via mathematical/computational emulation and simulation. The latter would be the correct view, and I didn’t really realize that this was the case until after I had discovered the WBE roadmap in 2010. This fundamental misconception allowed me, however, to also independently arrive at the insight underlying the real premise of WBE: that combining premise A – that we had various mathematical computational models of neuron behavior – with premise B – that we can perform mathematical models on computers – ultimately yields the conclusion C – that we can simply perform the relevant mathematical models on computational substrate, thereby effectively instantiating the mind “embodied” in those neural operations while simultaneously eliminating many logistical and technological challenges to the prosthetic approach. This seemed likelier than the original assumption—conflating neuronal activity with normative computation, as a special case not applicable to, say, muscle cells or skin cells, which isn’t the presumption WBE makes at all—because this approach only required the ability to mathematically model anything, rather than relying on a fundamental equivalence between two different types of physical system (neuron and classical computer). The fact that I mistakenly saw it as an approach to emulation that was categorically dissimilar to normative WBE also helped urge me on to continue conceptual development of the various sub-aims of the project after having found that the idea of brain emulation already existed, because I thought that my approach was sufficiently different to warrant my continued effort.

There are other reasons for suspecting that mind may not be computationally reducible using current computational paradigms – reasons that rely neither on vitalism (i.e., the claim that mind is at least partially immaterial and irreducible to physical processes) nor on the invalidity of the Church-Turing thesis. This line of reasoning has nothing to do with functionality and everything to do with possible physical bases for subjective-continuity, both a) immediate subjective-continuity (i.e., how can we be a unified, continuous subjectivity if all our component parts are discrete and separate in space?), which can be considered as the capacity to have subjective experience, also called sentience (as opposed to sapience, which designates the higher cognitive capacities like abstract thinking), and b) temporal subjective-continuity (i.e., how do we survive as continuous subjectivities through a process of gradual substrate replacement?). Thus this argument impacts the possibility of computationally reproducing mind only insofar as the definition of mind is not strictly functional but is made to include a subjective sense of self—or immediate subjective-continuity. Note that subjective-continuity through gradual replacement is not speculative (just the scale and rate required to sufficiently implement it are), but rather has proof of concept in the normal metabolic replacement of the neuron’s constituent molecules. Each of us is materially a different person than we were seven years ago, and we still claim to retain subjective-continuity. Thus, gradual replacement works; it is just the scale and rate required that are under question.

This is another way in which my approach and project differs from WBE. WBE equates functional equivalence (i.e., the same output via different processes) with subjective equivalence, whereas my approach involved developing variant approaches to neuron-replication-unit design that were each based on a different hypothetical basis for instantive subjective continuity.

 Are Current Computational Paradigms Sufficient?

Biological neurons are both analog and binary. It is useful to consider a 1st tier of analog processes, manifest in the graded potentials spreading and summing across the neuronal dendrites and soma, with a 2nd tier of binary processing, in that either the sum of those graded potentials crosses the threshold value needed for the neuron to fire an action potential, or it falls short and the neuron fails to fire. Thus the analog processes form the basis of the digital ones. Moreover, the neuron is in an analog state even in the absence of membrane depolarization, through the generation of the resting-membrane potential (maintained via active ion-transport proteins), which is analog rather than binary because it always undergoes minor fluctuations, being instantiated by an active process (ion pumps). Thus the neuron at any given time is always in the process of a state-transition (and minor state-transitions still within the variation-range allowed by a given higher-level static state; e.g., resting membrane potential is a single state, yet still undergoes minor fluctuations because the ions and components manifesting it still undergo state-transitions without the resting-membrane potential itself undergoing a state-transition), and thus is never definitively on or off. This brings us to the first potential physical basis for both immediate and temporal subjective-continuity. Analog states are continuous, and the fact that there is never a definitive break in the processes occurring at the lower levels of the neuron represents a potential basis for our subjective sense of immediate and temporal continuity.
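A toy simulation can make this two-tier picture concrete. In the sketch below (Python), the membrane variable evolves continuously and jitters slightly even at rest (the analog tier), while the decision to fire is all-or-nothing (the binary tier). All constants are illustrative assumptions, not physiological values.

    import random

    # Toy two-tier neuron: continuous analog dynamics, binary firing decision.
    resting_potential = -70.0   # baseline the analog state fluctuates around
    threshold = -55.0           # crossing this triggers an all-or-nothing "fire"
    leak_rate = 0.1             # pull back toward rest at each time step
    noise_amplitude = 0.5       # small fluctuations persist even with no input

    v = resting_potential
    for t in range(200):
        synaptic_input = 2.0 if 50 <= t < 80 else 0.0           # brief input burst
        v += synaptic_input - leak_rate * (v - resting_potential)
        v += random.uniform(-noise_amplitude, noise_amplitude)  # analog jitter at "rest"
        if v >= threshold:             # 2nd tier: binary outcome
            print(f"t={t}: fired")
            v = resting_potential      # reset after firing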

Paradigms of digital computation, on the other hand, are at the lowest scale either definitively on or definitively off. While any voltage within a certain range will cause the generation of an output, it is still at base binary because in the absence of input the logic elements are not producing any sort of fluctuating voltage—they are definitively off. In binary computation, the substrates undergo a break (i.e., region of discontinuity) in their processing in the absence of inputs, and are in this way fundamentally dissimilar to the low-level operational modality of biological neurons by virtue of being procedurally discrete rather than procedurally continuous.

If the premise holds true that the analog and procedurally continuous nature of neuron functioning (including action potentials, the resting-membrane potential, and metabolic processes) forms a potential basis for immediate and temporal subjective-continuity, then current digital paradigms of computation may prove insufficient at maintaining subjective-continuity if used as the substrate in a gradual-replacement procedure, while still being sufficient to functionally replicate the mind in all empirically verifiable metrics and measures. This is due to both the operational modality of binary processing (i.e., lack of analog output) and the procedural modality of binary processing (the lack of temporal continuity, or the lack of minor fluctuations in reference to a baseline state when in a resting or inoperative state). A logic element could have a fluctuating resting voltage rather than the absence of any voltage and could thus be procedurally continuous while still being operationally discrete by producing solely binary outputs.

So there are two possibilities here. One is that any physical substrate used to replicate a neuron (whether via 1st-order embodiment, a.k.a. prosthesis/physical systems, or via 2nd-order embodiment, a.k.a. computational emulation or simulation) must not undergo a break in its operation in the absence of input (because biological neurons do not, and this may be a potential basis for instantive subjective-continuity), but rather must produce a continuous or uninterrupted signal when in a “steady state” (i.e., in the absence of inputs). The second possibility includes all the premises of the first, but adds that such an inoperative-state signal (or “no-inputs”-state signal) must also undergo minor fluctuations, because only then is a steady stream of causal interaction occurring; producing a perfectly steady signal could be as discontinuous as producing no signal at all, like being “on pause.”
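A trivially simple sketch (again in Python, with made-up numbers) may help distinguish the two possibilities from ordinary binary behaviour: a logic element that is simply off between inputs, one that holds a perfectly flat baseline, and one whose baseline undergoes minor fluctuations; only the last is continuously “doing something” in the absence of input.

    import random

    def binary_element(t):
        """Definitively off in the absence of input: a break in processing."""
        return 0.0

    def flat_baseline_element(t):
        """Procedurally continuous but static: an unvarying steady-state signal."""
        return 1.0

    def fluctuating_baseline_element(t):
        """Procedurally continuous and active: minor fluctuations around a baseline,
        reflecting ongoing causal interaction even when no inputs are present."""
        return 1.0 + random.uniform(-0.05, 0.05)

    # Sample each element's "no input" behaviour over a few time steps.
    for t in range(5):
        print(binary_element(t),
              flat_baseline_element(t),
              round(fluctuating_baseline_element(t), 3))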

Thus one reason for developing the physicalist-functionalist (i.e., physically embodied prosthetic) approach to NRU design was a hedging of bets, in the case that a.) current computational substrates fail to replicate a personally continuous mind for the reasons described above, or b.) we fail to discover the principles underlying a given physical process—thus being unable to predictively model it—but still succeed in integrating it with the artificial systems comprising the prosthetic approach until such a time as we are able to discover its underlying principles, or c.) we find some other, heretofore unanticipated conceptual obstacle to the computational reducibility of mind.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Copeland, B. J. (2008). The Church-Turing Thesis. In The Stanford Encyclopedia of Philosophy (Fall 2008 Edition). Retrieved February 28, 2013, from http://plato.stanford.edu/archives/fall2008/entries/church-turing

Crick, F. (1984). Memory and molecular turnover. Nature, 312(5990), 101. PMID: 6504122.

Criterion of Falsifiability, Encyclopædia Britannica. Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/201091/criterion-of-falsifiability

Drexler, K. E. (1986). Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Books.

Grabianowski (2007). How Brain-computer Interfaces Work. Retrieved February 28, 2013, from http://computer.howstuffworks.com/brain-computer-interface.htm

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013, from http://www.minduploading.org/

Martins, N. R., Erlhagen, W. & Freitas Jr., R. A. (2012). Non-destructive whole-brain monitoring using nanorobots: Neural electrical data rate requirements. International Journal of Machine Consciousness, 2011. Retrieved February 28, 2013, from http://www.nanomedicine.com/Papers/NanoroboticBrainMonitoring2012.pdf.

Narayan, A. (2004). Computational Methods for NEMS. Retrieved February 28, 2013, from http://nanohub.org/resources/407.

Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap, Technical Report #2008-3. Future of Humanity Institute, Oxford University. Retrieved February 28, 2013.

Star, E. N., Kwiatkowski, D. J. & Murthy, V. N. (2002). Rapid turnover of actin in dendritic spines and its regulation by activity. Nature Neuroscience, 5, 239-246.

Tsien, J. Z., Rampon, C., Tang, Y. P. & Shimizu, E. (2000). NMDA receptor dependent synaptic reinforcement as a crucial process for memory consolidation. Science, 290, 1170-1174.

Vladimir, Z. (2013). Neural Network. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/410549/neural-network

Wolf, W. (March 2009). Cyber-physical Systems. In Embedded Computing. Retrieved February 28, 2013, from http://www.jiafuwan.net/download/cyber_physical_systems.pdf

 

Theme and Variations #1, Op. 61 (2009) – Video by G. Stolyarov II

Theme and Variations #1, Op. 61 (2009) – Video by G. Stolyarov II

This 2009 composition was written in a theme-and-variations format, with the main theme being presented, then varied five times, then repeated in its original form. The melody is played by a harpsichord with piano accompaniment, and a second harpsichord provides additional accompaniment in the first variation.

This composition has been remastered in Finale 2011 software and is played by two harpsichords and a piano.

Download the MP3 file of this composition here.

See the index of Mr. Stolyarov’s compositions, all available for free download, here.

The artwork is Mr. Stolyarov’s Abstract Orderism Fractal 25, available for download here and here.

Remember to LIKE, FAVORITE, and SHARE this video in order to spread rational high culture to others.

The Best Novels and Plays about Business: Results of a Survey – Article by Edward W. Younkins

The Best Novels and Plays about Business: Results of a Survey – Article by Edward W. Younkins

The New Renaissance Hat
Edward W. Younkins
May 10, 2013
******************************
My Koch Research Fellows, Jomana Krupinski and Kaitlyn Pytlak, and I conducted a survey of 250 Business and Economics professors and 250 English and Literature professors. Colleges and universities were randomly selected, and then professors from the relevant departments were also randomly selected to receive our email survey. They were asked to list and rank from 1 to 10 what they considered to be the best novels and plays about business. We did not attempt to define the word “best”, leaving that decision to each respondent. We obtained sixty-nine usable responses from Business and Economics professors and fifty-one from English and Literature professors. A list of fifty choices was given to each respondent, and an opportunity was presented to vote for works not on the list. When tabulating the results, ten points were given to a novel or play in a respondent’s first position, nine points were assigned to a work in the second position, and so on, down to the tenth listed work, which was allotted one point. The lists below present the top twenty-five novels and plays for each group of professors. Interestingly, fifteen works made both top-25 lists. These are marked with an asterisk (*) in the lists below.
***
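The positional scoring described above is straightforward to reproduce. The sketch below (Python) shows how a respondent’s ranked list of up to ten works translates into points, ten for the first position down to one for the tenth; the ballots shown are hypothetical examples, not the survey’s actual responses.

    from collections import defaultdict

    def tally(ballots):
        """Positional scoring: 10 points for a respondent's first-ranked work,
        9 for the second, and so on down to 1 point for the tenth."""
        scores = defaultdict(int)
        for ranked_list in ballots:
            for position, work in enumerate(ranked_list[:10]):
                scores[work] += 10 - position
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    # Hypothetical ballots for illustration only.
    ballots = [
        ["Atlas Shrugged", "The Great Gatsby", "The Jungle"],
        ["Death of a Salesman", "Atlas Shrugged"],
    ]
    for work, points in tally(ballots):
        print(points, work)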

The Best Novels and Plays about Business

Business and Economics Professors (points in parentheses):
1. Atlas Shrugged, Ayn Rand* (457)
2. The Fountainhead, Ayn Rand* (297)
3. The Great Gatsby, F. Scott Fitzgerald* (216)
4. Death of a Salesman, Arthur Miller* (164)
5. Time Will Run Back, Henry Hazlitt (145)
6. The Jungle, Upton Sinclair* (136)
7. The Gilded Age, Mark Twain and Charles Dudley Warner* (95)
8. Glengarry Glen Ross, David Mamet* (89)
9. God Bless You, Mr. Rosewater, Kurt Vonnegut, Jr.* (57)
10. Other People’s Money, Jerry Sterner (57)
11. Bartleby: The Scrivener, Herman Melville* (55)
12. A Man in Full, Tom Wolfe (48)
13. Babbitt, Sinclair Lewis* (47)
14. The Man in the Gray Flannel Suit, Sloan Wilson (43)
15. Rabbit is Rich, John Updike* (41)
16. Major Barbara, George Bernard Shaw (39)
17. Dombey and Son, Charles Dickens* (33)
18. The Goal, Eliyahu M. Goldratt (33)
19. The Driver, Garet Garrett (32)
20. Executive Suite, Cameron Hawley (32)
21. The Way We Live Now, Anthony Trollope (32)
22. American Pastoral, Philip Roth* (29)
23. The Octopus, Frank Norris* (29)
24. Sometimes a Great Notion, Ken Kesey* (28)
25. North and South, Elizabeth Gaskell (27)

English and Literature Professors (points in parentheses):
1. Death of a Salesman, Arthur Miller* (282)
2. Bartleby: The Scrivener, Herman Melville* (259)
3. The Great Gatsby, F. Scott Fitzgerald* (231)
4. The Jungle, Upton Sinclair* (143)
5. Babbitt, Sinclair Lewis* (126)
6. Glengarry Glen Ross, David Mamet* (121)
7. The Rise of Silas Lapham, William Dean Howells (98)
8. American Pastoral, Philip Roth* (85)
9. The Confidence Man, Herman Melville (75)
10. The Fountainhead, Ayn Rand* (75)
11. A Hazard of New Fortunes, William Dean Howells (66)
12. The Octopus, Frank Norris* (65)
13. Atlas Shrugged, Ayn Rand* (62)
14. Nice Work, David Lodge (62)
15. The Big Money, John Dos Passos (59)
16. The Gilded Age, Mark Twain and Charles Dudley Warner* (58)
17. Rabbit is Rich, John Updike* (55)
18. Seize the Day, Saul Bellow (55)
19. Mildred Pierce, James M. Cain (54)
20. The Financier, Theodore Dreiser (53)
21. Dombey and Son, Charles Dickens* (51)
22. Sometimes a Great Notion, Ken Kesey* (45)
23. The Last Tycoon, F. Scott Fitzgerald (44)
24. The Moviegoer, Walker Percy (43)
25. God Bless You, Mr. Rosewater, Kurt Vonnegut, Jr.* (39)

 

Dr. Edward W. Younkins is Professor of Accountancy at Wheeling Jesuit University. He is the author of Capitalism and Commerce: Conceptual Foundations of Free Enterprise [Lexington Books, 2002], Philosophers of Capitalism: Menger, Mises, Rand, and Beyond [Lexington Books, 2005] (see Mr. Stolyarov’s review of this book), and Flourishing and Happiness in a Free Society: Toward a Synthesis of Aristotelianism, Austrian Economics, and Ayn Rand’s Objectivism [Rowman & Littlefield Pub Incorporated, 2011] (see Mr. Stolyarov’s review of this book). Many of Dr. Younkins’s essays can be found online on his web page at www.quebecoislibre.org. You can contact Dr. Younkins at younkins@wju.edu.