Happy Future Day! – Article by Edward Hudgins

Edward Hudgins
******************************

Stand up for optimism about the future today!

Transhumanism Australia, a non-profit that promotes education in science and technology, has marked March 1 as “Future Day.” It wants this day celebrated worldwide as a time “to consider the future of humanity.” If all of us made a habit of celebrating our potential, it could transform a global culture mired in pessimism and malaise. It would help build an optimistic world that is confident about what humans can accomplish if we put our minds and imaginations to it.

The Future is Bright

The information and communications technology that helps define and shape our world was, 40 years ago, a vision of the future brought into present reality by visionaries like Steve Jobs and Bill Gates. The exponential growth of the power of semiconductors allowed entrepreneurs to create one new industry and cutting-edge product and service after another.

Today, we are at exponential takeoff points in biotech, nanotech, and artificial intelligence. For example, the cost of sequencing a human genome was $100 million in 2001 and $10 million in 2007, but it costs only a few thousand dollars today. Steve Jobs created the first Apple computers in his garage. Biohackers working out of similar garages could transform our lives in ways that still seem to most folks like science fiction; indeed, the prospect of “curing death” is no longer a delusion of madmen but the subject of well-funded research projects in the laboratories of the present.

For a prosperous present and promising future, a society needs physical infrastructure—roads, power, communications. It needs a legal infrastructure—laws and political structures that protect the liberty of individuals so they can act freely and flourish in civil society. And it requires moral infrastructure, a culture that promotes the values of reason and individual productive achievement.

Future “Future Days”

We should congratulate our brothers “Down Under” for conceiving of Future Day. They have celebrated it in Sydney with a conference on the science that will produce a bright tomorrow. We in America and folks around the world should build on this idea. Today it’s a neat idea: next year, we could start a powerful tradition, a global Future or Human Achievement Day, promoting the bright future that could be.

Were such a day marked in every school and every media outlet, it could raise achiever consciousness. It could celebrate achievement in the culture—who invented everything that makes up our world today, and how? It could promote achievement as a central value in the life of each individual, whether that individual is nurturing a child to maturity or a business to profitability, writing a song, poem, business plan, or dissertation, laying the bricks of a building or designing it, or arranging for its financing.

Such a day would help create the moral infrastructure necessary for a prosperous, fantastic, non-fiction future, a world as it can be and should be, a world created by humans for humans—or even transhumans!

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright The Atlas Society. For more information, please visit www.atlassociety.org.

Ayn Rand and Friedrich A. Hayek: A Side-by-Side Comparison – Article by Edward W. Younkins

Edward W. Younkins
August 1, 2015
******************************

Ayn Rand and Friedrich A. Hayek did more than any other writers in the Twentieth Century to turn intellectual opinion away from statism and toward a free society. Although they are opposed on many philosophical and social issues, they generally agree on the superiority of a free market. Rand’s defense of capitalism differs dramatically from Hayek’s explanation of the extended order. In addition, Hayek approves of state activity that violates Rand’s ideas of rights and freedom. The purpose of this brief essay is to describe, explain, and compare the ideas of these two influential thinkers. To do this, I present and explain an exhibit that provides a side-by-side summary of the differences between Rand and Hayek on a number of issues.

In their early years of writing, both Hayek and Rand were dismissed by intellectuals, but they were heralded by businessmen. Hayek began to gain some respect from intellectuals when he published The Road to Serfdom in 1944. He wrote a number of scholarly books, attained formal academic positions, and earned the Nobel Prize for economics in 1974. Rand never did write scholarly works or hold a formal academic position. Her philosophy must be extracted from her essays and her fiction.

Hayek was read in college classes sooner, and to a much greater extent, than was Rand. He was viewed by intellectuals as a responsible and respected scholar, and Rand was not. His vision of anti-statism was more acceptable to intellectuals because he called for some exceptions to laissez-faire capitalism. In his writings he permitted concessions for some state interventions. In his immense and varied body of work, he touched upon a great many fields, including anthropology, evolutionary biology, cognitive science, philosophy, economics, linguistics, political science, and intellectual history. During the last 25 years or so, Rand’s works have been increasingly studied by scholars. There is now an Ayn Rand Society affiliated with the American Philosophical Association and a scholarly publication devoted to the study of her ideas—The Journal of Ayn Rand Studies. In addition, her writings are now being covered in college classes.

A Summary Comparison

Exhibit I provides a summary comparison of Rand and Hayek based on a variety of factors and dimensions. With respect to metaphysics and epistemology, Rand holds that “A is A” and that reality is knowable. Contrariwise, Hayek argues that reality is unknowable and that what men see are distorted representations or reproductions of objects existing in the world. The skeptic Hayek goes so far as to state that the notion of things in themselves (i.e., the noumenal world) can be dismissed. Whereas Rand’s foundation is reality, the best that Hayek can offer as a foundation is words and language.

Hayek supports the view that the human mind must have a priori categories that are prior to, and responsible for, the ability to perceive and interpret the external world. He adds to this Kantian view by making the case that each individual mind’s categories are restructured according to the distinct experiences of each particular person. Each person’s neural connections can therefore be seen as semi-permanent and affected by his or her environment and experiences. The mind’s categories evolve as each specific person experiences the world. According to Hayek, there is pre-sensory knowledge embedded in the structure of the mind and the nervous system’s synaptic connections, which can be further created and modified over time. For the neo-Kantian Hayek, knowledge always has a subjective quality.

Reason for Rand is active, volitional, and efficacious. It follows that she sees rationality as man’s primary virtue. She sees progress through science and technology as the result of the human ability to think conceptually and to analyze logically through induction and deduction. Rand also contends that people can develop objective concepts that correspond with reality.

In his philosophy, Hayek relegates reason to a minor role. He argues for a modest perspective of people’s reasoning capabilities. He contends that reason is passive and that it is a social product. Hayek’s message of intellectual humility is primarily aimed at constructivist rationalism rather than critical rationalism. As an “anti-rationalist,” he explained that the world is too complex for any government planner to intentionally design and construct society’s institutions. However, he is a proponent of the limited potential of critical rationalism through which individuals use local and tacit knowledge in their everyday decisions. Hayek views progress as a product of an ongoing dynamic evolutionary process. He said that we cannot know reality but we can analyze evolving words and language. Linguistic analysis and some limited empirical verification provide Hayek with somewhat of an analytical foundation. His coherence theory of concepts is based on agreement among minds. For Hayek, concepts happen to the mind. Of course, his overall theory of knowledge is that individuals know much more than can be expressed in words.

Rand makes a positive case for freedom based on the nature of man and the world. She explains that man’s distinctive nature is exhibited in his rational thinking and free will. Each person has the ability to think his own thoughts and control his own energies in his efforts to act according to those thoughts. People are rational beings with free wills who have the ability to fulfill their own life purposes, aims, and intentions. Rand holds that each individual person has moral significance. He or she exists, perceives, experiences, thinks and acts in and through his or her own body and therefore from unique points in time and space. It follows that the distinct individual person is the subject of value and the unit of social analysis. Each individual is responsible for thinking for himself, for acting on his own thoughts, and for achieving his own happiness.

Hayek denies the existence of free will. However, he explains that people act as if they have free will because they are never able to know how they are determined to act by various biological, cultural, and environmental factors. His negative case for freedom is based on the idea that no one person or government agency is able to master the complex multiplicity of elements needed to do so. Such relevant knowledge is never totally possessed by any one individual. There are too many circumstances and variables affecting a situation to take them all into account. His solution to this major problem is to permit people the “freedom” to pursue and employ the information they judge to be the most relevant to their chosen goals. For Hayek, freedom is good because it best promotes the growth of knowledge in society. Hayek explains that in ordering society we should depend as much as possible on spontaneous forces such as market prices and as little as possible on force. Acknowledging man’s socially-constructed nature, he does not view individuals as independent agents but rather as creatures of society.

According to Rand, the principle of man’s rights can be logically derived from man’s nature and needs. Rights are a moral concept. For Rand, the one fundamental right is a person’s right to his own life. She explains that rights are objective conceptual identifications of the factual requirements of a person’s life in a social context. A right is a moral principle that defines and sanctions one’s freedom of action in a social context. Discussions of individual rights are largely absent from Hayek’s writings. At most he says that rights are created by society through the mechanism of law.

Whereas Rand speaks of Objective Law, Hayek speaks of the Rule of Law. Objective laws must be clearly expressed in terms of essential principles. They must be objectively justifiable, impartial, consistent, and intelligible. Rand explains that objective law is derived from the rational principle of individual rights. Objective Law deals with the specific requirements of a man’s life. Individuals must know in advance what the law forbids them from doing, what constitutes a violation, and what penalty would be incurred if they break the law. Hayek says that the Rule of Law is the opposite of arbitrary government. The Rule of Law holds that government coercion must be limited by known, general, and abstract rules. According to Hayek certain abstract rules of conduct came into being because groups who adopted them became better able to survive and prosper. These rules are universally applicable to everyone and maintain a sphere of responsibility.

Rand espouses a rational objective morality based on reason and egoism. In her biocentric ethics, moral behavior is judged in relation to achieving specific ends with the final end being an individual’s life, flourishing, and happiness. For Hayek, ethics is based on evolution and emotions. Ethics for Hayek are functions of biology and socialization. They are formed through habits and imitation.

Rand advocates a social system of laissez-faire capitalism in which the sole function of the state is the protection of individual rights. Hayek, on the other hand, allows for certain exceptions and interventions to make things work. He holds that it is acceptable for the government to supply public goods and a safety net.

For Rand, the consciousness of the individual human person is the highest level of mental functioning. For Hayek, it is a supra-conscious framework of neural connections through which conscious mental activity gains meaning. He states that this meta-conscious mechanism is taken for granted by human beings. The set of a person’s physiological impulses forms what Hayek calls the sensory order. Perception and pattern recognition follow one’s sensory order, which is altered by a person’s own perception and history of experiences.

Aristotle is Rand’s only acknowledged philosophical influence. They both contend that to make life fully human (i.e., to flourish), an individual must acquire virtues and make use of his reason as fully as he is capable. Hayek was influenced by Kant and Popper in epistemology, Ferguson and Smith in evolutionary theory, Hume in ethics, and Wittgenstein in linguistics.

Although Rand and Hayek are opposed on many philosophical questions, they generally agree on the desirability of a free market and are among the most well-known defenders of capitalism in the twentieth century. The works of both of these intellectual giants are highly recommended for any student of liberty.

Exhibit I

A Summary Comparison

Dimension | Rand | Hayek
Foundation | Reality | Words and Language
Knowledge | Reality is knowable. | Skepticism – The idea of things in themselves can be dismissed.
Reason | Reason is active, volitional, and efficacious. | Reason is passive and a social product.
Progress | Based on power of human reason and conscious thought | Evolution and social selection
Analytic Method | Logical analysis, including induction and deduction | Linguistic analysis and empiricism
Theory of Concepts | Objective concepts that correspond with reality | Coherence or agreement among minds
Freedom | Positive case for freedom | Negative case for “freedom”
Free Will | Man has free will. | Man is determined but acts as if he has free will.
Subject of value and unit of social analysis | Individual happiness | Perpetuation of society (i.e., the group)
The Individual | Independent | Dependent—man is socially constituted
Rights | Based on the nature of the human person | Created by society through law
Law | Objective Law | Rule of Law
Ethics and Morality | Rational objective morality based on reason and egoism | Evolutionary and emotive ethics based on altruism, which is noble but cannot be implemented because of ignorance; established through habits and imitation
Desired Social System | Laissez-faire capitalism | Minimal welfare state that supplies public goods and a safety net
Highest level of understanding and mental functioning | Consciousness of the individual | Meta-conscious framework—neural connections
Philosophical influences | Aristotle | Ferguson, Smith, Kant, Hume, Popper, Wittgenstein

The Victory of Truth is Never Assured! (2009) – Article by G. Stolyarov II

G. Stolyarov II
Originally Published February 4, 2009
as Part of Issue CLXXXVI of The Rational Argumentator
Republished July 22, 2014
******************************
Note from the Author: This essay was originally published as part of Issue CLXXXVI of The Rational Argumentator on February 4, 2009, using the Yahoo! Voices publishing platform. Because of the imminent closure of Yahoo! Voices, the essay is now being made directly available on The Rational Argumentator.
~ G. Stolyarov II, July 22, 2014
***

Many advocates of free markets, reason, and liberty are content to just sit back and let things take their course, thinking that the right ideas will win out by virtue of being true and therefore in accord with objective reality. Sooner or later, these people think, the contradictions entailed in false ideas – contradictions obvious to the free-market advocates – will become obvious to everybody. Moreover, false ideas will result in bad consequences that people will rebel against, leading them to apply true ideas instead. While this view is tempting – and I wish it reflected reality – I am afraid that it misrepresents the course that policies and intellectual trends take, as well as the motivations of most human beings.

Why does the truth not always – indeed, virtually never, up until the very recent past – win out in human societies among the majority of people? Indeed, why can one confidently say that most people are wrong about most intellectual matters and matters of policy most of the time? A few reasons will be explored here.

First, the vast majority of people are short-sighted and unaware of secondary effects of their actions. For instance, they see the direct effects of government redistribution of wealth – especially if they are on the receiving end – as positive. They get nice stuff, after all. But the indirect secondary effects – the reduced incentives of the expropriated to produce additional wealth – are not nearly so evident. They require active contemplation, which most people are too busy to engage in at that sophisticated a level.

The second reason why truth rarely wins in human societies – at least in the short-to-intermediate term – is that people’s lifespans are (thus far in our history) finite. While many people do learn from their experiences and from abstract theory and recognize more of the truth as they get older, those people also tend to die at alarming rates and be replaced by newer generations that more often than not make the same mistakes and commit the same fallacies. The prevalence of age-old superstitions – including beliefs in ghosts, faith healing, and socialism – can be explained by the fact that the same tempting fallacies tend to afflict most unprepared minds, and it takes a great deal of time and intellectual training for most people to extricate themselves from them – unless they happened to have particularly enlightened and devoted parents. If all people lived forever, one could expect them to learn from their mistakes and fallacies eventually and for the prevalence of those errors to asymptotically approach zero over time.

The third reason for the difficulty true ideas have in winning is the information problem. No one person has access to all or even a remote fraction of the truth, and certainly no one person can claim to be in possession of all the true ideas required to prevent or even optimally minimize all human folly, aggression, and self-destruction. Moreover, just because a true idea exists somewhere and someone knows it does not mean that many people will be actively seeking it out. Improving information dispersal through such technologies as the Internet certainly helps inform many more people than would have been informed otherwise, but this still requires a fundamental willingness to seek out truth on the part of people. Some have this willingness; others could not care less.

The fourth reason why the truth rarely wins out is that the proponents of false ideas are often persistent, clever, and well organized. They promote their ideas – which they may well believe to be the truth – just as assiduously, if not more so, than the proponents of truth promote their ideas. In fact, how true an idea is might matter when it comes to the long-term viability of the culture and society whose participants adopt it; but it matters little with regard to how persuasive people find the idea. After all, if truth were all that persuaded people, then bizarre beer ads that imply that by drinking beer one will have fancy cars and lots of beautiful women would not persuade anyone. The persistence of advertising that focuses on anything but the actual merits and qualities of the goods and services advertised shows that truth and persuasiveness are two entirely different qualities.

The fifth reason why the truth has a difficult time winning over public opinion is rather unfortunate and may be remedied in time. But many people are, to be polite, intellectually not prepared to understand it. Free-market economics and politics are not easy subjects for everybody to grasp. If a significant fraction of the population in economically advanced countries has trouble remembering basic historical facts or doing basic algebra, how hard must economic and political theory be for such people! I do not believe that any person is incapable of learning these ideas, or any ideas at all. But to teach them takes time that they personally are often unwilling to devote to the task. As economic and technological growth renders more leisure time available to more people, this might change, but for the time being the un-intellectual state of the majority of people is a tremendous obstacle to the spread of true ideas.

It is bad enough that many people are un-intellectual and thus unable to grasp true ideas without a great deal of effort they do not wish to expend. That problem can be remedied with enough material and cultural progress. The greater problem, and the sixth reason why the truth has difficulty taking hold, is that a sizable fraction of the population is also anti-intellectual. They not only cannot or try not to think and learn; they actively despise those who do. Anti-intellectualism is a product of pure envy and malice, much like bullying in the public schools. It led to the genocides of Nazi Germany, the Soviet Union under Stalin, Communist China under Mao, and Communist Cambodia under the Khmer Rouge. In Western schools today, it leads to many of the best and brightest students – who know more of the truth than virtually anyone else – being relentlessly teased, mocked, suppressed, ostracized, and even physically attacked by their jealous and lazy peers as well as by some egalitarian-minded teachers.

But enough about why most people are unreceptive to true ideas. Even those who are receptive have substantial problems that need to be overcome – and most often are not overcome – in order for the truth to win. The seventh reason why the truth rarely wins is that most of the people who do understand it are content to merely contemplate it instead of actively promoting it. They might think that they are powerless to affect the actual course of affairs, and their sole recourse is simply the satisfaction of knowing that they are right while the world keeps senselessly punishing itself – or the satisfaction that at least they are not an active or enthusiastic part of “the system” that leads to bad outcomes. This, I regret to say, is not enough. Knowing that one is right without doing anything about it leads to the field of ideas and actions being wholly open to and dominated by the people who are wrong and whose ideas have dangerous consequences.

Everyone who knows even a shred of the truth wants to be a theorist and expound grand systems about what is or is not right. I know that I certainly do. I also know that theoretical work and continual refinement of theories are essential to any thriving movement for cultural and intellectual change. But while theory is necessary, it is not sufficient. Someone needs to do the often monotonous, often frustrating, often exhausting grunt work of implementing the theories in whatever manner his or her abilities and societal position allow. The free-market movement needs government officials who are willing to engage in pro-liberty reforms. But it also needs ordinary citizens who are willing to write, speak, and attempt to reach out to other people in innovative ways that might just be effective at persuading someone. To promote the truth effectively, a tremendously high premium needs to be put on the people who actually apply the true ideas, as opposed to simply contemplating them.

Read other articles in The Rational Argumentator’s Issue CLXXXVI.

Dead Models vs. Living Economics – Article by Sanford Ikeda

Sanford Ikeda
November 23, 2013
******************************

Since 2008, straw-man versions of free-market economics have popped up whenever someone needs an easy villain. Keynes roared back to prominence, and it looks like this reaction might be gaining steam.

According to an article in The Guardian, students at a few British universities, prompted by “a leading academic,” are demanding that economics professors stop teaching what they refer to as “neoclassical free-market theories.”

Michael Joffe, an economics professor at Imperial College, said, “The aim should be to provide students with analysis based on the way the world works, not the way theories argue it ought to work.”

Joffe is right on that point. But his target is wrong: It’s not free-market economics that’s the problem, it’s the model of perfect competition that often gets conflated with free-market economics. A commenter on my recent columns addressing falsehoods about the free market (here and here) suggested I discuss this conflation.

I was thinking of putting it into a third “falsehoods” column. But the Guardian story makes me think the issue deserves more attention. Here’s the key passage:

The profession has been criticised for its adherence to models of a free market that claim to show demand and supply continually rebalancing over relatively short periods of time—in contrast to the decade-long mismatches that came ahead of the banking crash in key markets such as housing and exotic derivatives, where asset bubbles ballooned [emphasis added].

Why Do You Support the Free Market?

“Free-market economists,” on the other hand, typically have confidence in free markets owing to our understanding of economics, although we often (notoriously) disagree on exactly what the correct economics is. A number of free-market economists base their confidence on what is known as the model of “perfect competition.” Briefly, that model shows how in the long run the price of a good in a competitive market will equal the additional cost of producing a unit of that good (i.e., its marginal cost), and it shows that no one has the power to set prices on her own. How do you get those results? By making something like the following assumptions:

  1. Free entry: While buyers and sellers may incur costs to consume and to produce, there are no additional costs to enter or leave a market.
  2. Product homogeneity: From the point of view of any buyer in the market, the output of one seller is a perfect substitute for the output of any other seller.
  3. Many buyers and sellers: No single buyer or seller is large enough to independently raise or lower the market price.
  4. Perfect knowledge: All buyers and sellers have so much information that they will never regret any action they take.

From these assumptions you can derive not only marginal-cost pricing but also nice efficiency properties: there is no waste, and costs are minimized. That is why people like the model.
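The long-run logic is easy to make concrete. The following is a minimal sketch of the textbook story, not anything from this column; the cost function, the demand curve, and every number in it are arbitrary assumptions chosen only to show how free entry drives price toward marginal cost and minimum average cost.

```python
import math

# Illustrative (assumed) primitives: identical firms with cost C(q) = F + c*q^2,
# so MC(q) = 2*c*q and average cost is minimized at q* = sqrt(F/c);
# market demand is linear, Q_d(p) = a - b*p.
F, c = 100.0, 1.0
a, b = 1000.0, 10.0

def firm_supply(p):
    # A price-taking firm produces where p = MC(q) = 2*c*q.
    return p / (2 * c)

def profit(p):
    q = firm_supply(p)
    return p * q - (F + c * q ** 2)

def market_price(n):
    # Clear the market with n identical firms: a - b*p = n * p / (2*c).
    return a / (b + n / (2 * c))

n = 1
for _ in range(10_000):
    p = market_price(n)
    if profit(p) > 1e-6:               # positive profit attracts a new entrant
        n += 1
    elif profit(p) < -1e-6 and n > 1:  # losses drive a firm out
        n -= 1
    else:                              # roughly zero profit: long-run equilibrium
        break

p = market_price(n)
q = firm_supply(p)
print(f"firms={n}, price={p:.2f}, MC={2 * c * q:.2f}, min AC={2 * math.sqrt(F * c):.2f}")
```

In this toy parameterization, entry stops at roughly 80 firms, where price, marginal cost, and minimum average cost all equal 20 and economic profit is zero.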

Moreover, for some important questions the analysis of supply and demand under perfect competition is quite useful. Push the legal minimum wage too high and you’ll generate unemployment; push the maximum rent-control rate too low and you’ll get housing shortages. Also, financial markets sometimes—though as we have seen, not always—conform to the predictions of perfect competition. It’s a robust theory in many ways, but if you base your support for the free market on the model of perfect competition, you’re on shaky ground. The evidence against it is pretty devastating.

Free Entry, Not Perfect Knowledge

In fact, it doesn’t even take the Panic of 2008 to shake up the model; any comparison of the model with everyday reality would do the job. Assumptions two and three about product homogeneity and many buyers and sellers are pretty unrealistic, but it’s the last assumption about perfect knowledge that’s the killer. (I’m aware of Milton Friedman’s “twist” (PDF), which argues that this is irrelevant and only predictions matter, but it’s a methodology I don’t agree with.) Markets are rarely if ever at or near equilibrium, and people with imperfect knowledge make disequilibrating mistakes, even without the kind of government intervention that caused the Panic of 2008.

When the institutions are right, however, people learn from the mistakes that they or others make, and there’s a theory of markets—certainly neither Keynesian nor Marxist—that fits the bill better than perfect competition.

It’s Austrian theory. Its practitioners argue competition is an entrepreneurial-competitive process (PDF). This theory not only says that competition exists in the presence of ignorance, error, and disequilibrium, it explains how profit-seeking entrepreneurs in a free market positively thrive in this environment. The principal assumption that the theory rests on, besides the existence of private property, is No. 1: free entry.

As long as there are no legal barriers to entry, if Jack wants to sell an apple for $1 and Jill is asking $2 for that same quality apple—that is, there is a disequilibrium here in which either Jack or Jill (or both) is making an error—you can profit by buying low from Jack and selling high to Jill’s customer, Lucy. If another entrepreneur, Linus, spots what you’re doing, he can bid up the price you’re giving Jack and bid down the price at which you’re selling to Lucy. Bottom line: A process of entrepreneurial competition tends to remove errors. There is no need to assume perfect knowledge to get a competitive outcome; instead, competition itself improves the level of knowledge.
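A toy simulation of that entrepreneurial-competitive process may help; it is my own illustration rather than the author's, and the starting prices and bidding increment are made-up assumptions.

```python
# Toy model of entrepreneurial arbitrage under free entry.
# Starting prices and the bidding increment are arbitrary assumptions.

jack_price, jill_price = 1.00, 2.00   # same-quality apple, two asking prices (disequilibrium)
step = 0.05                           # how much each new entrant bids prices up or down

entrants = 0
while jill_price - jack_price > step:
    # Each entrant buys low, bidding Jack's price up, and undercuts the high
    # seller, bidding down the price at which the apple is resold to Lucy.
    jack_price += step
    jill_price -= step
    entrants += 1
    print(f"entrant {entrants}: buy at {jack_price:.2f}, "
          f"sell at {jill_price:.2f}, spread {jill_price - jack_price:.2f}")
```

The point of the sketch is that the spread, and with it the pure arbitrage profit, is competed away by entry alone; no participant ever needed perfect knowledge of the "right" price.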

So Joffe and the critics are wrong about the theory. You don’t knock out the theoretical legs from under the free market by “debunking” the model of perfect competition. He is also wrong about the history. As I’ve referenced many times, economists Steve Horwitz and Pete Boettke have documented how a government-led, interventionist dynamic, and not the free market, led to the Panic of 2008.

Joffe, the Imperial College professor, “called for economics courses to embrace the teachings of Marx and Keynes to undermine the dominance of neoclassical free-market theories.” He also complains that “there is a lot that is taught on [sic] economics courses that bears little relation to the way things work in the real world.” I agree. But that complaint would apply at least as much to the Keynesian and Marxian economics he hypes as to the static, equilibrium-based models of competition he slams.

Sanford Ikeda is an associate professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism.
***
This article was originally published by The Foundation for Economic Education.

Libertarians and Voting – Article by Alex Salter

Alex Salter
November 5, 2013
******************************

While it’s often dangerous to make blanket statements about sociopolitical movements, it’s not a stretch to say libertarians have a contentious relationship with voting.

Many libertarians don’t vote at all, and cite positive (as opposed to normative) reasons for doing so. The standard argument goes something like this: Voting in an election is costly, in the sense that it takes time that could have been used doing something else. However, in the vast majority of elections—and probably all elections that matter for who gets to determine significant aspects of policy—each individual vote, taken by itself, is worthless. The voting populace is so large that the probability that the marginal vote affects the outcome of the election is virtually zero. To the extent that one votes solely for the sake of impacting the outcome of elections, the costs of voting outweigh the expected benefits, defined as the probability one’s vote is decisive multiplied by the payoff from having one’s preferred candidate win. A really big payoff, multiplied by zero, is zero.
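The arithmetic behind "multiplied by zero" can be made explicit with a back-of-the-envelope sketch. The probability, payoff, and cost figures below are assumptions chosen for illustration, not estimates from the article.

```python
# Expected-value arithmetic for the instrumental case against voting.
# All three figures are assumptions for illustration only.

p_decisive = 1e-8          # assumed chance that one vote decides a large election
payoff = 10_000_000.0      # assumed dollar value placed on one's candidate winning
cost_of_voting = 20.0      # assumed time and travel cost of casting a ballot

expected_benefit = p_decisive * payoff
print(f"expected benefit of voting: ${expected_benefit:.2f}")            # $0.10
print(f"net value of voting: ${expected_benefit - cost_of_voting:.2f}")  # $-19.90
# Even a very large payoff, multiplied by a near-zero probability of being
# decisive, is dwarfed by a modest cost of voting.
```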

This argument is a staple of the academic literature in political economy and public choice. It’s used to explain many phenomena, the most prominent of which is rational ignorance. Since each individual’s vote doesn’t matter, no individual has any incentive to become informed on the issues. As such, voters acting rationally remain largely uninformed. As an explanation for an observed phenomenon in political life, it is impeccably reasoned and extremely useful for academic research. However, as an explanation for why individual libertarians refrain from voting, it is potentially quite dangerous.

Voting is a quintessential collective-action problem. Policy would be more libertarian at the margin if libertarians showed up en masse to vote on election day. But for each individual libertarian voter, voting is costly. Furthermore, the benefits of a more libertarian polity are available to each libertarian whether he votes or not. Each libertarian potential voter thus acts according to his own self-interest and stays home, even though if some mechanism were used to get all libertarians to vote, each of them would be better off.

Why is using this argument for abstaining from voting dangerous?  The answer lies in a significant reason why libertarians are libertarians. Many who are not libertarians advocate government provision of goods and services such as roads or education on the grounds that collective-action problems would result in these goods and services being undersupplied. Libertarians rightly respond that this is nonsense. History is full of examples of privately supplied roads and education, not to mention more difficult cases. The existence of a collective-action problem is not a sufficient argument for government intervention. To believe otherwise is to ignore the creative and imaginative capacities of individuals engaging in private collective action to overcome collective-action problems.

Every time a libertarian points to the collective-action problem as a reason for abstaining from voting, he weakens, at least partially, the argument that individuals in their private capacity can overcome these kinds of problems. By suggesting we cannot overcome a relatively simple collective-action problem like voting, our illustrations of ways other collective-action problems have been solved privately, and arguments for how such problems might be solved privately going forward, may appear disingenuous.

Looking at the problem more closely, there are all sorts of ways libertarians can solve the collective-action problem associated with voting. Libertarians could meet throughout the year in social groups dedicated to furthering their education by, say, reading Human Action together, and follow up such meetings with dinner parties or social receptions. The price tag for admission to such groups could be meeting at a predetermined time and place on Election Day and voting. This coupling of mild political activism with other desirable activities is an example of bundling, a very common mechanism by which collective goods and services have been privately supplied throughout history.

At this point, a few caveats are in order.

First, this potential solution is irrelevant for those who refuse to engage in the political process for ethical reasons. A libertarian could find the current popular interpretation of the “social contract” so unacceptable that any engagement in the political process cannot be justified. Second, even after deriving mechanisms for overcoming the voting collective-action problem, an individual’s opportunity cost of participating may still exceed the expected benefit. Academics who are libertarians — who must spend significant time engaging highly technical scholarly literature to further their careers — would be most likely to cite this argument, and they may very well be right to do so. Third, organizing “voting clubs” large enough to have a chance of mattering for election outcomes may itself be prohibitively costly. Such is most likely to be true in national elections.

But if these reasons or others are why libertarians abstain from voting, they should say so. Citing the collective-action problem by itself is not enough, and it undermines the argument that purposeful human actors can overcome collective-action problems through voluntary association.

Alex Salter is a Ph.D. student in economics at George Mason University.

This article was originally published by The Foundation for Economic Education.

Illiberal Belief #22: Persuasion is Force – Article by Bradley Doucet

Bradley Doucet
October 13, 2013
******************************
I must admit, I love a good television commercial. The creativity that goes into the best TV ad is as impressive and enjoyable to me as a quality drama, comedy, or documentary. “You feel sad for the Moo Cow Milker? That is because you are crazy. Tacky items can easily be replaced with better IKEA.” But damn those clever Swedes! They have, through the alchemy of advertising, forced me into outfitting my entire apartment with their stylish yet affordable household items.

I kid, of course; but there is a certain line of thought out there that cannot abide advertising, and that credits it with all manner of evil. Advertising, they say, makes us fat by brainwashing us into wanting fast food and sugary cereal. It makes men want to buy beer, fancy cars, or anything else associated with hot women. (A current TV commercial makes fun of the “scantily-clad women washing car” cliché by having a group of sumo wrestlers wash a new Subaru.) Advertising makes women dissatisfied with their appearance and hence creates a need for fashion and beauty products that would not otherwise exist. Yes, because as we all know, humans do not naturally enjoy fatty, sugary foods, men would not drink beer or drive fancy cars in the absence of advertising, and women need corporations to teach them to care about their looks. Puh-lease.

Think of the Children

Advertising is about the transmission of information, and it is also about convincing people to buy something. In other words, it is a form of persuasion, but this use of persuasion is implicitly equated with the use of force by its detractors. Sometimes, as in the case of the French website RAP (“Résistance à l’Agression Publicitaire” or “Resistance to Advertising Aggression”), the equating of persuasion and force is explicit. The site features an illustration of a police officer brandishing a billy club accompanied by the slogan, “Ne vous laissez pas matraquer par la pub,” which translates, “Don’t let yourself be bludgeoned by advertising.”

Usually, though, the message is less overt, as it is on Commercial Alert’s website, whose slogan is “Protecting communities from commercialism.” The site complains about the psychology profession “helping corporations influence children for the purpose of selling products to them.” Here, the word “influence” seems none too menacing, but its effect is quickly bolstered by the words “crisis,” “epidemic,” “complicity,” and “onslaught.” Force may not be explicitly mentioned, but these words bring to mind infectious disease, crime, and violent conquest. Without coming right out and saying it, the implication is clear―although one could argue, ironically enough, that this effect was meant to be subliminal.

Now, are children more vulnerable than adults to the persuasive nature of advertising? Of course they are, especially when very young. But it is part of the job of parents (and later, teachers) to equip children with the tools necessary to judge competing claims and see through manipulative techniques. I’ll be the first to admit that there is room for improvement in this area―and a free market in education would go a long way toward providing that improvement―but as far as advertising goes, most kids are savvy to the more outlandish claims well before they even reach adolescence. As people grow up, they learn through experience that beer doesn’t bring babes (though a little may beneficially lower one’s own inhibitions) and that makeup will only get you so far. At any rate, treating all adults like children is hardly a fair way to deal with the fact that some minority of people will remain gullible their entire lives.

Of Words and Bullets

Many of those who really hate advertising share a worldview that involves rich, powerful corporations controlling everything. In fact, there is a sense in which this view has some merit, for it is true that large corporations often gain unfair advantage over their competitors, suppliers, and customers. When this happens, though, it happens through the gaining of political influence, which means the use of actual, legally sanctioned force to hogtie the competition, restrict consumers’ choices, or extract taxpayers’ hard-earned income. In a truly free market, the government would not have the authority to dole out special privileges, as it does in our mixed economies. Without any goodies to fight over, corporations would have no legal means of squashing competitors and could only succeed by being as efficient as possible and persuading customers to buy their products (and if their products do not satisfy, they will not get many repeat customers). To target this persuasion as a serious problem when actual, legal force is being used surely reveals an inverted sense of priorities, or at least a serious misunderstanding about the sources of society’s woes.

Another example of the implicit equating of persuasion with force is the thinking behind legislated limits on the amounts individuals can spend expressing their political views during an election―in essence, limits on political advertising. Here, as in commercial advertising, the purpose is clear: if persuasion is force, then the government is perfectly justified in countering that initiation of force with retaliatory force. If words are bullets, then words can be met with bullets. But it is clear what happens to free speech in such a scenario. Instead of competing voices clamouring for your attention, one monolithic government propaganda machine decides what can and cannot be said. In the political realm, this works against new or historically small parties trying to break through since they have a disproportionately hard time attracting many small contributions in order to pay for ads to get their message out. This leads to a situation in which a couple of largely indistinguishable parties become more and more firmly entrenched.

In fact, the notion that persuasion is force brings to mind nothing so much as George Orwell’s novel, 1984, in which the government has destroyed the precision of words by continually reinforcing its contradictory slogans: war is peace, freedom is slavery, and ignorance is strength. It is shocking to observe the smug self-righteousness of those who hold forth on the enormous manipulative power of advertising and who are so sure that they, of all people, have not been brainwashed. But in fact, it is they who have been, if not brainwashed, then at least misled about the relative power of advertising versus the average Joe’s ability to think and judge for himself. They have bought, hook, line, and sinker, the most superficial critique of capitalism, when our mixed form of capitalism has plenty of real abuses crying out for correction.

The Power of Persuasion

The point is not that persuasion is powerless. I am engaged in trying to persuade you of something right now, and if I didn’t think I had a chance of succeeding, I wouldn’t waste my time. The point, rather, is that persuasion must be met with persuasion, words and rhetorical techniques must be answered with more words and more rhetoric. If free competition is allowed in the marketplace of ideas, no one’s victory is assured, and we needn’t fret too much over the use of psychological tricks, because the trickster’s competitors can use them too, or overtly challenge them instead. (See Gennady Stolyarov II’s article “The Victory of Truth Is Never Assured!” for a related call to action.)

If we are still worried, though, it is undeniable that better education―freer education―would produce a less pliant population, especially important for the issue of political persuasion. The other thing that would help is fighting for full freedom of competition, in both commerce (no special government privileges) and politics (no limits on political speech). In other words, we need to eliminate the government’s use of force in the realms of education, commerce, and political campaigning. Agitating for the government to solve our problems for us with the use of more force will only make matters worse, and further infantilize us in the process.

Bradley Doucet is Le Québécois Libre‘s English Editor. A writer living in Montreal, he has studied philosophy and economics, and is currently completing a novel on the pursuit of happiness. He also writes for The New Individualist, an Objectivist magazine published by The Atlas Society, and sings.

Maintaining the Operational Continuity of Replicated Neurons – Article by Franco Cortese

Franco Cortese
June 3, 2013
******************************
This essay is the tenth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first nine chapters were previously published on The Rational Argumentator.
***

Operational Continuity

One of the reasons for continuing conceptual development of the physical-functionalist NRU (neuron-replication-unit) approach, despite the perceived advantages of the informational-functionalist approach, was in the event that computational emulation would either fail to successfully replicate a given physical process (thus a functional-modality concern) or fail to successfully maintain subjective-continuity (thus an operational-modality concern), most likely due to a difference in the physical operation of possible computational substrates compared to the physical operation of the brain (see Chapter 2). In regard to functionality, we might fail to computationally replicate (whether in simulation or emulation) a relevant physical process for reasons other than vitalism. We could fail to understand the underlying principles governing it, or we might understand its underlying principles so as to predictively model it yet still fail to understand how it affects the other processes occurring in the neuron—for instance if we used different modeling techniques or general model types to model each component, effectively being able to predictively model each individually while being unable to model how they affect each other due to model untranslatability. Neither of these cases precludes the aspect in question from being completely material, and thus completely potentially explicable using the normative techniques we use to predictively model the universe. The physical-functionalist approach attempted to solve these potential problems through several NRU sub-classes, some of which kept certain biological features and functionally replaced certain others, and others that kept alternate biological features and likewise functionally replaced alternate ones. These can be considered as varieties of biological-nonbiological NRU hybrids that functionally integrate into their own, predominantly non-biological operation those biological features, as they exist in the biological nervous system, which we failed to replicate functionally or operationally.

The subjective-continuity problem, however, is not concerned with whether something can be functionally replicated but with whether it can be functionally replicated while still retaining subjective-continuity throughout the procedure.

This category of possible basis for subjective-continuity has stark similarities to the possible problematic aspects (i.e., operational discontinuity) of current computational paradigms and substrates discussed in Chapter 2. In that case it was postulated that discontinuity occurred as a result of taking something normally operationally continuous and making it discontinuous: namely, (a) the fact that current computational paradigms are serial (whereas the brain has massive parallelism), which may cause components to only be instantiated one at a time, and (b) the fact that the resting membrane potential of biological neurons makes them procedurally continuous—that is, when in a resting or inoperative state they are still both on and undergoing minor fluctuations—whereas normative logic gates both do not produce a steady voltage when in an inoperative state (thus being procedurally discontinuous) and do not undergo minor fluctuations within such a steady-state voltage (or, more generally, a continuous signal) while in an inoperative state. I had a similar fear in regard to some mathematical and computational models as I understood them in 2009: what if we were taking what was a continuous process in its biological environment, and—by using multiple elements or procedural (e.g., computational, algorithmic) steps to replicate what would have been one element or procedural step in the original—effectively making it discontinuous by introducing additional intermediate steps? Or would we simply be introducing a number of continuous steps—that is, if each element or procedural step were operationally continuous in the same way that the components of a neuron are, would it then preserve operational continuity nonetheless?

This led to my attempting to develop a modeling approach aiming to retain the same operational continuity as exists in biological neurons, which I will call the relationally isomorphic mathematical model. The biophysical processes comprising an existing neuron are what implements computation; by using biophysical-mathematical models as our modeling approach, we might be introducing an element of discontinuity by mathematically modeling the physical processes giving rise to a computation/calculation, rather than modeling the computation/calculation directly. It might be the difference between modeling a given program, and the physical processes comprising the logic elements giving rise to the program. Thus, my novel approach during this period was to explore ways to model this directly.

Rather than using a host of mathematical operations to model the physical components that themselves give rise to a different type of mathematics, we instead use a modeling approach that maintains a 1-to-1 element or procedural-step correspondence with the level-of-scale that embodies the salient (i.e., aimed-for) computation. My attempts at developing this produced the following approach, though I lack the pure mathematical and computer-science background to judge its true accuracy or utility. The components, their properties, and the inputs used for a given model (at whatever scale) are substituted by numerical values, the magnitude of which preserves the relationships (e.g., ratio relationships) between components/properties and inputs, and by mathematical operations which preserve the relationships exhibited by their interaction. For instance: if the interaction between a given component/property and a given input produces an emergent inhibitory effect biologically, then one would combine them to get their difference or their factors, respectively, depending on whether they exemplify a linear or nonlinear relationship. If the component/property and the input combine to produce emergently excitatory effects biologically, one would combine them to get their sum or products, respectively, depending on whether they increased excitation in a linear or nonlinear manner.

In an example from my notes, I tried to formulate how a chemical synapse could be modeled in this way. Neurotransmitters are given analog values such as positive or negative numbers, the sign of which (i.e., positive or negative) depends on whether it is excitatory or inhibitory and the magnitude of which depends on how much more excitatory/inhibitory it is than other neurotransmitters, all in reference to a baseline value (perhaps 0 if neutral or neither excitatory nor inhibitory; however, we may need to make this a negative value, considering that the neuron’s resting membrane-potential is electrically negative, and not electrochemically neutral). If they are neurotransmitter clusters, then one value would represent the neurotransmitter and another value its quantity, the sum or product of which represents the cluster. If the neurotransmitter clusters consist of multiple neurotransmitters, then two values (i.e., type and quantity) would be used for each, and the product of all values represents the cluster. Each summative-product value is given a second vector value separate from its state-value, representing its direction and speed in the 3D space of the synaptic junction. Thus by summing the products of all, the numerical value should contain the relational operations each value corresponds to, and the interactions and relationships represented by the first- and second-order products. The key lies in determining whether the relationship between two elements (e.g., two neurotransmitters) is linear (in which case they are summed), or nonlinear (in which case they are combined to produce a product), and whether it is a positive or negative relationship—in which case their factor, rather than their difference, or their product, rather than their sum, would be used. Combining the vector products would take into account how each cluster’s speed and position affects the end result, thus effectively emulating the process of diffusion across the synaptic junction. The model’s past states (which might need to be included in such a modeling methodology to account for synaptic plasticity—e.g., long-term potentiation and long-term modulation) would hypothetically be incorporated into the model via a temporal-vector value, wherein a third value (position along a temporal or “functional”/”operational” axis) is used when combining the values into a final summative product. This is similar to such modeling techniques as phase-space, which is a quantitative technique for modeling a given system’s “system-vector-states” or the functional/operational states it has the potential to possess.
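As a concrete illustration, here is a rough sketch of how that combination rule might be coded. This is my reading of the verbal description above, not the author's actual model; the class, the example values, and the linear-versus-nonlinear flag are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    """One neurotransmitter cluster (hypothetical values for illustration)."""
    value: float      # signed magnitude: positive = excitatory, negative = inhibitory
    quantity: float   # how many molecules the cluster contains
    nonlinear: bool   # True: combine multiplicatively with the running total

    def weight(self) -> float:
        # Cluster weight is type value times quantity, per the scheme above.
        return self.value * self.quantity

def combine(clusters) -> float:
    """Fold clusters into one summative value, preserving linear vs. nonlinear
    relationships instead of simulating the underlying biophysics step by step."""
    total = 0.0
    for c in clusters:
        total = total * c.weight() if c.nonlinear else total + c.weight()
    return total

release = [
    Cluster(value=+1.5, quantity=10, nonlinear=False),  # excitatory, linear (additive)
    Cluster(value=-0.5, quantity=4,  nonlinear=False),  # inhibitory, linear (additive)
    Cluster(value=+1.2, quantity=1,  nonlinear=True),   # modulatory, nonlinear (multiplicative)
]
print(f"net drive: {combine(release):+.2f}")   # +15.60, a net excitatory transmission
```

The design point the sketch tries to capture is the one-to-one correspondence between procedural steps and salient interactions: each release event contributes exactly one addition or one multiplication, rather than a cascade of lower-level biophysical calculations.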

How excitatory or inhibitory a given neurotransmitter is may depend upon other neurotransmitters already present in the synaptic junction; thus if the relationship between one neurotransmitter and another is not the same as that first neurotransmitter and an arbitrary third, then one cannot use static numerical values for them because the sequence in which they were released would affect how cumulatively excitatory or inhibitory a given synaptic transmission is.

A hypothetically possible case of this would be if one type of neurotransmitter can bond or react with two or more types of neurotransmitter. Let’s say that it’s more likely to bond or react with one than with the other. If the chemically less attractive (or reactive) one were released first, it would bond anyway due to the absence of the comparatively more chemically attractive one, such that if the more attractive one were released thereafter, then it wouldn’t bond, because the original one would have already bonded with the chemically less attractive one.

If a given neurotransmitter’s numerical value or weighting is determined by its relation to other neurotransmitters (i.e., if one is excitatory, and another is twice as excitatory, then if the first was 1.5, the second would be 3—assuming a linear relationship), and a given neurotransmitter does prove to have a different relationship to one neurotransmitter than it does another, then we cannot use a single value for it. Thus we might not be able to configure it such that the normative mathematical operations follow naturally from each other; instead, we may have to computationally model (via the [hypothetically] subjectively discontinuous method that incurs additional procedural steps) which mathematical operations to perform, and then perform them continuously without having to stop and compute what comes next, so as to preserve subjective-continuity.

We could also run the subjectively discontinuous model at a faster speed to account for its higher quantity of steps/operations and the need to keep up with the relationally isomorphic mathematical model, which possesses comparatively fewer procedural steps. Thus subjective-continuity could hypothetically be achieved (given the validity of the present postulated basis for subjective-continuity—operational continuity) via this method of intermittent external intervention, even if we need extra computational steps to replicate the single informational transformations and signal-combinations of the relationally isomorphic mathematical model.
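
As a back-of-the-envelope illustration of this speedup: if the subjectively discontinuous model needs k procedural steps for every single transformation of the relationally isomorphic model, it must be clocked roughly k times faster to stay synchronized. The figures below are purely illustrative.

isomorphic_steps_per_event = 1
discontinuous_steps_per_event = 6      # e.g., decide the operation, fetch operands, apply, store, ...
base_rate_hz = 1e6                     # hypothetical operating rate of the isomorphic model

required_rate_hz = base_rate_hz * discontinuous_steps_per_event / isomorphic_steps_per_event
print(f"Run the discontinuous model at ~{required_rate_hz:.0e} Hz to keep pace.")   # ~6e+06 Hz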

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Wireless Synapses, Artificial Plasticity, and Neuromodulation – Article by Franco Cortese

Wireless Synapses, Artificial Plasticity, and Neuromodulation – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 21, 2013
******************************
This essay is the fifth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first four chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”, “Immortality: Material or Ethereal? Nanotech Does Both!”, “Concepts for Functional Replication of Biological Neurons”, and “Gradual Neuron Replacement for the Preservation of Subjective-Continuity”.
***

Morphological Changes for Neural Plasticity

The finished physical-functionalist units would need the ability to change their emergent morphology not only for active modification of single-neuron functionality but even for basic functional replication of normative neuron behavior, since they must take into account neural plasticity and the way that morphological changes facilitate learning and memory. My original approach involved the use of retractable, telescopic dendrites and axons (with corresponding internal retractable and telescopic dendritic spines and axonal spines, respectively) activated electromechanically by the unit-CPU. For morphological changes, by providing the edges of each membrane section with an electromechanical hinged connection (i.e., a means of changing the angle of inclination between immediately adjacent sections), the emergent morphology can be controllably varied. This eventually developed into an internal compartment designed to detach a given membrane section, move it down into the internal compartment of the neuronal soma or terminal, transport it along a track that stores alternative membrane sections stacked face-to-face (to compensate for limited space), and subsequently replace it with a membrane section containing an alternate functional component (e.g., an ion pump or an ion channel, whether voltage-gated or ligand-gated) embedded therein. Note that this approach was also conceived of as an alternative to retractable axons/dendrites and axonal/dendritic spines: by attaching additional membrane sections at a very steep angle of inclination (or a lesser inclination with a greater quantity of segments), an emergent section of artificial membrane is created that extends out from the biological membrane in the same way as axons and dendrites.

However, this approach was eventually supplemented by one that necessitates less technological infrastructure (i.e., one that is simpler and thus more economical and realizable). If the size of the integral-membrane components is small enough (preferably smaller than their biological analogues), then differential activation of components or membrane sections would achieve the same effect as changing the organization or type of integral-membrane components, effectively eliminating the need to actually interchange membrane sections at all.
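
A minimal sketch of what such “differential activation” might look like in code: a dense grid of small components is pre-embedded in the artificial membrane, and the unit-CPU simply selects which subset to enable. The component names and grid layout below are hypothetical.

# Hypothetical layout of embedded integral-membrane components in two membrane sections.
MEMBRANE_GRID = [
    ["Na_channel", "K_channel", "Na_K_pump", "ligand_gate"],
    ["Na_channel", "Cl_channel", "K_channel", "Na_K_pump"],
]

def configure(active_types):
    # Return an activation mask: True wherever a component should operate.
    return [[component in active_types for component in row] for row in MEMBRANE_GRID]

# Emulate a membrane section dominated by sodium conductance without swapping any sections:
for row in configure({"Na_channel", "Na_K_pump"}):
    print(row)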

Active Neuronal Modulation and Modification

The technological and methodological infrastructure used to facilitate neural plasticity can also be used for active modification and modulation of neural behavior (and of the emergent functionality determined by local neuronal behavior) towards the aim of mental augmentation and modification. Potential uses already discussed include mental amplification (increasing or augmenting existing functional modalities—e.g., intelligence, emotion, morality) and mental augmentation (the creation of categorically new functional and experiential modalities). While the distinction between modification and modulation isn’t definitive, a useful way of differentiating them is to consider modification as morphological changes that create new functional modalities, and modulation as actively varying the operation of existing structures or processes not through morphological change but through changes to the operation of integral-membrane components or to the properties of the local environment (e.g., increasing local ionic concentrations).

Modulation: A Less Discontinuous Alternative to Morphological Modification

The use of modulation to achieve the effective results of morphological changes seemed like a hypothetically less discontinuous alternative to morphological change itself (and thus one with a hypothetically greater probability of achieving subjective-continuity). I am more dubious about the validity of this approach now, because the emergent functionality (normatively determined by morphological features) is still changed in an effectively equivalent manner.

The Eventual Replacement of Neural Ionic Solutions with Direct Electric Fields

Upon full gradual replacement of the CNS with physical-functionalist equivalents, the preferred embodiment consisted of replacing the ionic solutions with electric fields that preserve the electric potential instantiated by the difference in ionic concentrations on the respective sides of the membrane. Such electric fields can be generated directly, without recourse to electrochemicals for manifesting them. In such a case the integral-membrane components would be replaced by a means of generating and maintaining a static and/or dynamic electric field on either side of the membrane, or even merely of generating an electrical potential (i.e., voltage—a broader category encompassing electric fields) with solid-state electronics.

This procedure would provide a fraction of the speedup (that is, the increased rate of subjective perception of time, which extends to speed of thought) offered by emulatory (i.e., strictly computational) replication-methods, because operation would no longer be limited by the rate of passive ionic diffusion but instead by the propagation velocity of electric or electromagnetic fields.
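
A rough order-of-magnitude comparison for this claim: action-potential propagation in biological axons is on the order of 1-100 m/s, whereas electric and electromagnetic fields propagate at an appreciable fraction of the speed of light. The specific figures below are illustrative, not measurements.

biological_mps = 100.0    # fast myelinated axon, near the upper end of the typical range
field_mps = 2.0e8         # signal propagation in a conductive medium, roughly two-thirds of c

print(f"potential speedup factor: ~{field_mps / biological_mps:.0e}")   # ~2e+06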

Wireless Synapses

If we replace the physical synaptic connections the NRU uses to communicate (with both existing biological neurons and with other NRUs) with a wireless means of synaptic-transmission, we can preserve the same functionality (insofar as it is determined by synaptic connectivity) while allowing any NRU to communicate with any other NRU or biological neuron in the brain at potentially equal speed. First we need a way of converting the output of an NRU or biological neuron into information that can be transmitted wirelessly. For cyber-physicalist-functionalist NRUs, regardless of their sub-class, this requires no new technological infrastructure because they already deal with 2nd-order (i.e., not structurally or directly embodied) information; the informational-functionalist NRU class deals solely in this type of information, and the cyber-physical-systems sub-class of the physicalist-functionalist NRUs deals with this kind of information in the intermediary stage between sensors and actuators—and consequently, converting what would have been a sequence of electromechanical actuations into information isn’t a problem. Only the passive-physicalist-functionalist NRU class requires additional technological infrastructure to accomplish this, because it doesn’t already use computational operational-modalities for its normative operation, whereas the other NRU classes do.

We dispose receivers within the range of every neuron (or alternatively NRU) in the brain, connected to actuators – the precise composition of which depends on the operational modality of the receiving biological neuron or NRU. The receiver translates incoming information into physical actuations (e.g., the release of chemical stores), thereby instantiating that informational output in physical terms. For biological neurons, the receiver’s actuators would consist of a means of electrically stimulating the neuron and releasable chemical stores of neurotransmitters (or ionic concentrations as an alternate means of electrical stimulation via the manipulation of local ionic concentrations). For informational-functionalist NRUs, the information is already in a form it can accept; it can simply integrate that information into its extant model. For cyber-physicalist-NRUs, the unit’s CPU merely needs to be able to translate that information into the sequence in which it must electromechanically actuate its artificial ion-channels. For the passive-physicalist (i.e., having no computational hardware devoted to operating individual components at all, operating according to physical feedback between components alone) NRUs, our only option appears to be translating received information into the manipulation of the local environment to vicariously affect the operation of the NRU (e.g., increasing electric potential through manipulations of local ionic concentrations, or increasing the rate of diffusion via applied electric fields to attract ions and thus achieve the same effect as a steeper electrochemical gradient or potential-difference).
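
The following Python sketch shows one way the receiver-and-actuator dispatch described above could be organized, branching on the class of the receiving unit; the class name, method names, and message format are hypothetical placeholders rather than a specified design.

class Receiver:
    def __init__(self, target_kind):
        # target_kind: "biological", "informational", "cyber-physical", or "passive-physical"
        self.target_kind = target_kind

    def deliver(self, message):
        if self.target_kind == "biological":
            self.release_chemical_stores(message)      # neurotransmitters or ionic stores
        elif self.target_kind == "informational":
            self.integrate_into_model(message)         # already in an acceptable informational form
        elif self.target_kind == "cyber-physical":
            self.schedule_actuations(message)          # CPU translates into ion-channel actuations
        else:                                          # passive-physical
            self.modulate_local_environment(message)   # applied fields, local ionic concentrations

    # Stub actuator methods; real implementations would drive hardware.
    def release_chemical_stores(self, m): print("release:", m)
    def integrate_into_model(self, m): print("integrate:", m)
    def schedule_actuations(self, m): print("actuate:", m)
    def modulate_local_environment(self, m): print("modulate:", m)

Receiver("biological").deliver({"spike": True, "strength": 0.8})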

The technological and methodological infrastructure for this is very similar to that used for the “integrational NRUs”, which allows a given NRU-class to communicate with either existing biological neurons or NRUs of an alternate class.

Integrating New Neural Nets Without Functional Distortion of Existing Regions

The use of artificial neural networks (which here will designate NRU-networks that do not replicate any existing biological neurons, rather than the normative Artificial Neuron Networks mentioned in the first and second parts of this essay), rather than normative neural prosthetics and BCI, was the preferred method of cognitive augmentation (creation of categorically new functional/experiential modalities) and cognitive amplification (the extension of existing functional/experiential modalities). Due to functioning according to the same operational modality as existing neurons (whether biological or artificial-replacements), they can become a continuous part of our “selves”, whereas normative neural prosthetics and BCI are comparatively less likely to be capable of becoming an integral part of our experiential continuum (or subjective sense of self) due to their significant operational dissimilarity in relation to biological neural networks.

A given artificial neural network can be integrated with existing biological networks in a few ways. One is interior integration, wherein the new network is “inter-threaded” with existing ones: individual artificial neurons are placed among one or multiple existing networks, so that the networks are integrated and connected on a very local level. In “anterior” integration, the new network would be integrated in a way comparable to the connection between separate cortical columns, with the majority of integration happening at the periphery of each respective network or cluster.

If the interior integration approach is used then the functionality of the region may be distorted or negated by virtue of the fact that neurons that once took a certain amount of time to communicate now take comparatively longer due to the distance between them having been increased to compensate for the extra space necessitated by the integration of the new artificial neurons. Thus in order to negate these problematizing aspects, a means of increasing the speed of communication (determined by both [a] the rate of diffusion across the synaptic junction and [b] the rate of diffusion across the neuronal membrane, which in most cases is synonymous with the propagation velocity in the membrane – the exception being myelinated axons, wherein a given action potential “jumps” from node of Ranvier to node of Ranvier; in these cases propagation velocity is determined by the thickness and length of the myelinated sections) must be employed.

My original solution was the use of an artificial membrane morphologically modeled on a myelinated axon, possessing very low membrane capacitance (and thus high propagation velocity), combined with decreasing the capacitance of the existing axon or dendrite of the biological neuron. The cumulative capacitance of both is decreased in proportion to how far apart they are moved. In this way, the propagation velocities of the existing neuron and the connector-terminal are increased to allow the existing biological neurons to communicate as fast as they would have prior to the addition of the artificial neural network. This solution was eventually supplemented by the wireless means of synaptic transmission described above, which allows any neuron to communicate with any other neuron at equal speed.

Gradually Assigning Operational Control of a Physical NRU to a Virtual NRU

This approach allows us to apply the single-neuron gradual replacement facilitated by the physical-functionalist NRU to the informational-functionalist (physically embodied) NRU. A given section of artificial membrane and its integral membrane components are modeled. When this model is functioning in parallel (i.e., synchronization of operative states) with its corresponding membrane section, the normative operational routines of that artificial membrane section (usually controlled by the unit’s CPU and its programming) are subsequently taken over by the computational model—i.e., the physical operation of the artificial membrane section is implemented according to and in correspondence with the operative states of the model. This is done iteratively, with the informationalist-functionalist NRU progressively controlling more and more sections of the membrane until the physical operation of the whole physical-functionalist NRU is controlled by the informational operative states of the informationalist-functionalist NRU. While this concept sprang originally from the approach of using multiple gradual-replacement phases (with a class of model assigned to each phase, wherein each is more dissimilar to the original than the preceding phase, thereby increasing the cumulative degree of graduality), I now see it as a way of facilitating sub-neuron gradual replacement in computational NRUs. Also note that this approach can be used to go from existing biological membrane-sections to a computational NRU, without a physical-functionalist intermediary stage. This, however, is comparatively more complex because the physical-functionalist NRU already has a means of modulating its operative states, whereas the biological neuron does not. In such a case the section of lipid bilayer membrane would presumably have to be operationally isolated from adjacent sections of membrane, using a system of chemical inventories (of either highly concentrated ionic solution or neurotransmitters, depending on the area of membrane) to produce electrochemical output and chemical sensors to accept the electrochemical input from adjacent sections (i.e., a means of detecting depolarization and hyperpolarization). Thus to facilitate an action potential, for example, the chemical sensors would detect depolarization, the computational NRU would then model the influx of ions through the section of membrane it is replacing and subsequently translate the effective results impinging upon the opposite side to that opposite edge via either the release of neurotransmitters or the manipulation of local ionic concentrations so as to generate the required depolarization at the adjacent section of biological membrane.
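
A schematic Python sketch of this iterative handover, in which control of each artificial membrane section passes from the unit’s CPU to the computational model only once the two run in parallel without divergence. The class names, the recorded traces, and the divergence test are hypothetical stand-ins for real instrumentation.

import random

class MembraneSection:
    def __init__(self, sid):
        self.sid, self.controller = sid, "unit-CPU"
    def run_and_record(self, n=10):
        random.seed(self.sid)                            # stand-in for recorded operative states
        return [random.random() for _ in range(n)]

class SectionModel:
    def simulate(self, sid, n=10):
        random.seed(sid)                                 # stand-in for modeled operative states
        return [random.random() for _ in range(n)]

def hand_over(sections, model, tolerance=1e-6):
    for s in sections:
        physical = s.run_and_record()
        modeled = model.simulate(s.sid)
        if max(abs(p - m) for p, m in zip(physical, modeled)) <= tolerance:
            s.controller = "model"                       # the model now drives this section's actuators
    return all(s.controller == "model" for s in sections)

print(hand_over([MembraneSection(i) for i in range(4)], SectionModel()))   # True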

Integrational NRU

This consisted of a unit facilitating connection between emulatory (i.e., informational-functionalist) units and existing biological neurons. The output of the emulatory units is converted into electrical and chemical output at the locations where the emulatory NRU makes synaptic connection with other biological neurons, facilitated through electrical stimulation (or the release of chemical inventories to increase local ionic concentrations) and through the release of neurotransmitters, respectively. The input of existing biological neurons making synaptic connections with the emulatory NRU is likewise read by electrical and chemical sensors and is converted into informational input corresponding to the operational modality of the informationalist-functionalist NRU classes.

Solutions to Scale

If we needed NEMS or something below the scale of the present state of MEMS for the technological infrastructure of either (a) the electromechanical systems replicating a given section of neuronal membrane, or (b) the systems used to construct and/or integrate the sections, or those used to remove or otherwise operationally isolate the existing section of lipid bilayer membrane being replaced from adjacent sections, a postulated solution consisted of taking the difference in length between the artificial membrane section and the existing lipid-bilayer section (which difference is determined by how small we can construct functionally operative artificial ion-channels) and incorporating it as added curvature in the artificial membrane section, such that its edges converge upon or superpose with the edges of the space left by the removal of the lipid bilayer membrane-section. We would also need to increase the propagation velocity (typically determined by the rate of ionic influx, which in turn is typically determined by the concentration gradient, or difference in the ionic concentrations on the respective sides of the membrane) such that the action potential reaches the opposite end of the replacement section at the same time that it would normally have via the lipid bilayer membrane. This could be accomplished directly by the application of electric fields with a charge opposite that of the ions (which would attract them, thus increasing the rate of diffusion), by increasing the number of open channels or the diameter of existing channels, or simply by increasing the concentration gradient through local manipulation of extracellular and/or intracellular ionic concentration—e.g., through concentrated electrolyte stores of the relevant ion that can be released to increase the local ionic concentration.
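
A quick numeric sketch of the curvature compensation: given a hypothetical artificial membrane section that is longer than the gap it must span, we can solve for the bend (arc) that makes its endpoints meet the gap’s edges. The arc and chord lengths below are illustrative numbers in arbitrary units.

import math

def curvature_for_fit(arc_len, chord_len):
    """Solve 2*sin(theta/2)/theta = chord/arc for the subtended angle theta (by bisection),
    then return (radius, theta). Assumes chord_len < arc_len."""
    target = chord_len / arc_len
    lo, hi = 1e-9, 2 * math.pi - 1e-9
    for _ in range(100):                        # the ratio decreases monotonically in theta
        mid = (lo + hi) / 2
        if 2 * math.sin(mid / 2) / mid > target:
            lo = mid
        else:
            hi = mid
    theta = (lo + hi) / 2
    return arc_len / theta, theta

radius, theta = curvature_for_fit(arc_len=1.3, chord_len=1.0)
print(f"bend radius ~{radius:.3f}, subtended angle ~{math.degrees(theta):.1f} degrees")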

If the degree of miniaturization is so low as to make this approach untenable (e.g., if increasing curvature still doesn’t allow successful integration), then a hypothesized alternative was to increase the overall space between adjacent neurons, integrate the NRU, replace the normative connection with chemical inventories (of either ionic compound or neurotransmitter) released at the site of the existing connection, and have the NRU (or NRU sub-section—i.e., artificial membrane section) wirelessly control the release of such chemical inventories according to its operative states.

The next chapter describes (a) possible physical bases for subjective-continuity through a gradual-uploading procedure and (b) possible design requirements for in vivo brain-scanning and for systems to construct and integrate the prosthetic neurons with the existing biological brain.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Project Avatar (2011). Retrieved February 28, 2013 from http://2045.com/tech2/

Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese

Concepts for Functional Replication of Biological Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 18, 2013
******************************
This essay is the third chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first two chapters were previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death” and “Immortality: Material or Ethereal? Nanotech Does Both!”.
***

The simplest approach to the functional replication of biological neurons I conceived of during this period involved what is normally called a “black-box” model of a neuron. This was already a concept in the wider brain-emulation community, but I had yet to find out about it. This is even simpler than the mathematically weighted Artificial Neurons discussed in the previous chapter. Rather than emulating or simulating the behavior of a neuron (i.e., using actual computational—or, more generally, signal—processing), we (1) determine the range of input values that a neuron responds to, (2) stimulate the neuron at each interval (the number of intervals depending on the precision of the stimulus) within that input-range, and (3) record the corresponding range of outputs.

This reduces the neuron to essentially a look-up-table (or, more formally, an associative array). The input ranges I originally considered (in 2007) consisted of a range of electrical potentials, but later (in 2008) were developed to include different cumulative organizations of specific voltage values (i.e., some inputs activated and others not) and finally the chemical input and outputs of neurons. The black-box approach was eventually seen as being applied to the sub-neuron scale—e.g., to sections of the cellular membrane. This creates a greater degree of functional precision, bringing the functional modality of the black-box NRU-class in greater accordance with the functional modality of biological neurons. (I.e., it is closer to biological neurons because they do in fact process multiple inputs separately, rather than singular cumulative sums at once, as in the previous versions of the black-box approach.) We would also have a higher degree of variability for a given quantity of inputs.
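
A minimal Python sketch of this black-box idea: stimulate across the input range, record outputs, and then replay them from an associative array. The toy neuron, the input range, and the step size are hypothetical placeholders for real instrumentation.

def build_black_box(neuron, input_range, step):
    # Stimulate the neuron at each interval within its input range and record the output.
    table, mv = {}, input_range[0]
    while mv <= input_range[1]:
        table[round(mv, 3)] = neuron(mv)
        mv += step
    return table

def black_box_response(table, input_mv, step):
    # Snap the incoming value to the nearest recorded interval and look it up.
    return table.get(round(round(input_mv / step) * step, 3))

# Toy "neuron": fires (1) at or above a -55 mV threshold, otherwise stays silent (0).
toy_neuron = lambda mv: 1 if mv >= -55.0 else 0
table = build_black_box(toy_neuron, (-80.0, -40.0), step=0.5)
print(black_box_response(table, -54.7, step=0.5))   # 1
print(black_box_response(table, -70.2, step=0.5))   # 0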

I soon chanced upon literature dealing with MEMS (micro-electro-mechanical systems) and NEMS (nano-electro-mechanical systems), which eventually led me to nanotechnology and its use in nanosurgery in particular. I saw nanotechnology as the preferred technological infrastructure regardless of the approach used; its physical nature (i.e., operational and functional modalities) could facilitate the electrical and chemical processes of the neuron if the physicalist-functionalist (i.e., physically embodied or ‘prosthetic’) approach proved either preferable or required, while the computation required for its normative functioning (regardless of its particular application) assured that it could facilitate the informationalist-functionalist (i.e., computational emulation or simulation) replication of neurons if that approach proved preferable. This was true of MEMS as well, with the sole exception of not being able to directly synthesize neurotransmitters via mechanosynthesis, being limited in this regard to the release of pre-synthesized biochemical inventories. Thus I felt that I was able to work on conceptual development of the methodological and technological infrastructure underlying both (or at least on variations to the existing operational modalities of MEMS and NEMS so as to make them suitable for their intended use), without having to definitively choose one technological/methodological infrastructure over the other. Moreover, there could be processes that are reducible to computation, yet still fail to be included in a computational emulation simply because we have not yet discovered the principles underlying them. The prosthetic approach had the potential of replicating this aspect by integrating such a process, as it exists in the biological environment, into its own physical operation, and performing iterative maintenance or replacement of the biological process until such a time as we are able to discover the underlying principles of those processes (which is a prerequisite for discovering how they contribute to the emergent computation occurring in the neuron) and thus include them in the informationalist-functionalist approach.

Also, I had by this time come across the existing approaches to Mind-Uploading and Whole-Brain Emulation (WBE), including Randal Koene’s minduploading.org, and realized that the notion of immortality through gradually replacing biological neurons with functional equivalents wasn’t strictly my own. I hadn’t yet come across Kurzweil’s thinking in regard to gradual uploading described in The Singularity is Near (where he suggests a similarly nanotechnological approach), and so felt that there was a gap in the extant literature in regard to how the emulated neurons or neural networks were to communicate with existing biological neurons (which is an essential requirement of gradual uploading and thus of any approach meant to facilitate subjective-continuity through substrate replacement). Thus my perceived role changed from the father of this concept to filling in the gaps and inconsistencies in the already-extant approach and in further developing it past its present state. This is another aspect informing my choice to work on and further varietize both the computational and physical-prosthetic approach—because this, along with the artificial-biological neural communication problem, was what I perceived as remaining to be done after discovering WBE.

The anticipated use of MEMS and NEMS in emulating the physical processes of the neurons included first simply electrical potentials, but eventually developed to include the chemical aspects of the neuron as well, in tandem with my increasing understanding of neuroscience. I had by this time come across Drexler’s Engines of Creation, which was my first introduction to antecedent proposals for immortality—specifically his notion of iterative cellular upkeep and repair performed by nanobots. I applied his concept of mechanosynthesis to the NRUs to facilitate the artificial synthesis of neurotransmitters. I eventually realized that the use of pre-synthesized chemical stores of neurotransmitters was a simpler approach that could be implemented via MEMS, thus being more inclusive for not necessitating nanotechnology as a required technological infrastructure. I also soon realized that we could eliminate the need for neurotransmitters completely by recording how specific neurotransmitters affect the nature of membrane-depolarization at the post-synaptic membrane and subsequently encoding this into the post-synaptic NRU (i.e., length and degree of depolarization or hyperpolarization, and possibly the diameter of ion-channels or differential opening of ion-channels—that is, some and not others) and assigning a discrete voltage to each possible neurotransmitter (or emergent pattern of neurotransmitters; salient variables include type, quantity and relative location) such that transmitting that voltage makes the post-synaptic NRU’s controlling-circuit implement the membrane-polarization changes (via changing the number of open artificial-ion-channels, or how long they remain open or closed, or their diameter/porosity) corresponding to the changes in biological post-synaptic membrane depolarization normally caused by that neurotransmitter.
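
A minimal sketch of this encoding: each neurotransmitter (or emergent neurotransmitter pattern) is assigned a discrete code, and the post-synaptic unit’s controller looks up the membrane-polarization program it should implement. Every code and parameter value below is hypothetical.

# code -> (duration in ms, polarization change in mV, fraction of artificial ion-channels to open)
POLARIZATION_PROGRAMS = {
    1: (5.0, +12.0, 0.60),   # e.g., a strongly excitatory transmitter
    2: (8.0,  -9.0, 0.35),   # e.g., an inhibitory transmitter
    3: (3.0,  +4.0, 0.20),   # e.g., a weakly excitatory transmitter
}

def apply_transmission(code):
    duration_ms, delta_mv, open_fraction = POLARIZATION_PROGRAMS[code]
    # The controlling circuit would open `open_fraction` of its artificial ion-channels
    # for `duration_ms`, producing roughly `delta_mv` of de- or hyperpolarization.
    return {"open_fraction": open_fraction, "duration_ms": duration_ms, "delta_mV": delta_mv}

print(apply_transmission(2))   # {'open_fraction': 0.35, 'duration_ms': 8.0, 'delta_mV': -9.0}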

In terms of the enhancement/self-modification side of things, I also realized during this period that mental augmentation (particularly the intensive integration of artificial-neural-networks with the existing brain) increases the efficacy of gradual uploading by decreasing the total portion of your brain occupied by the biological region being replaced—thus effectively making that portion’s temporary operational disconnection from the rest of the brain more negligible to concerns of subjective-continuity.

While I was thinking of the societal implications of self-modification and self-modulation in general, I wasn’t really consciously trying to do active conceptual work (e.g., working on designs for pragmatic technologies and methodologies as I was with limitless-longevity) on this side of the project due to seeing the end of death as being a much more pressing moral imperative than increasing our degree of self-determination. The 100,000 unprecedented calamities that befall humanity every day cannot wait; for these dying fires it is now or neverness.

Virtual Verification Experiments

The various alternative approaches to gradual substrate-replacement were meant to be alternative designs contingent upon various premises for what was needed to replicate functionality while retaining subjective-continuity through gradual replacement. I saw the various embodiments as being narrowed down through empirical validation prior to any whole-brain replication experiments. However, I now see that multiple alternative approaches—based, for example, on computational emulation (informationalist-functionalist) and physical replication (physicalist-functionalist) (these are the two main approaches thus far discussed) would have concurrent appeal to different segments of the population. The physicalist-functionalist approach might appeal to wide numbers of people who, for one metaphysical prescription or another, don’t believe enough in the computational reducibility of mind to bet their lives on it.

These experiments originally consisted of applying sensors to a given biological neuron and constructing NRUs based on a series of variations on the two main approaches, running each and looking for any functional divergence over time. This is essentially the same approach outlined in the WBE Roadmap (which I had yet to discover at this point), which suggests a validation approach involving experiments done on single neurons before moving on to the organismal emulation of increasingly complex species, up to and including the human. My thinking in regard to these experiments evolved over the next few years to also include some novel approaches that I don’t think have yet been discussed in communities interested in brain-emulation.

An equivalent physical or computational simulation of the biological neuron’s environment is required to verify functional equivalence, as otherwise we wouldn’t be able to distinguish between functional divergence due to an insufficient replication-approach/NRU-design and functional divergence due to difference in either input or operation between the model and the original (caused by insufficiently synchronizing the environmental parameters of the NRU and its corresponding original). Isolating these neurons from their organismal environment allows the necessary fidelity (and thus computational intensity) of the simulation to be minimized by reducing the number of environmental variables affecting the biological neuron during the span of the initial experiments. Moreover, even if this doesn’t give us a perfectly reliable model of the efficacy of functional replication given the amount of environmental variables one expects a neuron belonging to a full brain to have, it is a fair approximator. Some NRU designs might fail in a relatively simple neuronal environment and thus testing all NRU designs using a number of environmental variables similar to the biological brain might be unnecessary (and thus economically prohibitive) given its cost-benefit ratio. And since we need to isolate the neuron to perform any early non-whole-organism experiments (i.e., on individual neurons) at all, having precise control over the number and nature of environmental variables would be relatively easy, as this is already an important part of the methodology used for normative biological experimentation anyways—because lack of control over environmental variables makes for an inconsistent methodology and thus for unreliable data.

And as we increase to the whole-network and eventually organismal level, a similar reduction of the computational requirements of the NRU’s environmental simulation is possible by replacing the inputs or sensory mechanisms (from single photocell to whole organs) with VR-modulated input. The required complexity and thus computational intensity of a sensorially mediated environment can be vastly minimized if the normative sensory environment of the organism is supplanted with a much-simplified VR simulation.

Note that the efficacy of this approach in comparison with the first (reducing actual environmental variables) is hypothetically greater because going from simplified VR version to the original sensorial environment is a difference, not of category, but of degree. Thus a potentially fruitful variation on the first experiment (physical reduction of a biological neuron’s environmental variables) would be not the complete elimination of environmental variables, but rather decreasing the range or degree of deviation in each variable, including all the categories and just reducing their degree.

Anecdotally, one novel modification conceived during this period involves distributing sensors (operatively connected to the sensory areas of the CNS) in the brain itself, so that we can viscerally sense ourselves thinking—the notion of metasensation: a sensorial infinite regress caused by having sensors in the sensory modules of the CNS, essentially allowing one to sense oneself sensing oneself sensing.

Another is a seeming refigurement of David Pearce’s Hedonistic Imperative—namely, the use of active NRU modulation to negate the effects of cell (or, more generally, stimulus-response) desensitization—the fact that the more times we experience something, or indeed even think something, the more it decreases in intensity. I felt that this was what made some of us lose interest in our lovers and become bored by things we once enjoyed. If we were able to stop cell desensitization, we wouldn’t have to needlessly lose experiential amplitude for the things we love.

In the next chapter I will describe the work I did in the first months of 2008, during which I worked almost wholly on conceptual varieties of the physically embodied prosthetic (i.e., physical-functionalist) approach (particularly in gradually replacing subsections of individual neurons to increase how gradual the cumulative procedure is) for several reasons:

1. The original utility of ‘hedging our bets’, as discussed earlier—developing multiple approaches increases evolutionary diversity; thus, if one approach fails, we have other approaches to try.

2. I felt the computational side was already largely developed in the work done by others in Whole-Brain Emulation, and thus that I would benefit the larger objective of indefinite longevity more by focusing on those areas that were then comparatively less developed.

3. The perceived benefit of a new approach to subjective-continuity through a substrate-replacement procedure aiming to increase the likelihood of gradual uploading’s success by increasing the procedure’s cumulative degree of graduality. The approach was called Iterative Gradual Replacement and consisted of undergoing several gradual-replacement procedures, wherein the class of NRU used becomes progressively less similar to the operational modality of the original, biological neurons with each iteration; the greater the number of iterations used, the less discontinuous each replacement-phase is in relation to its preceding and succeeding phases. The most basic embodiment of this approach would involve gradual replacement with physical-functionalist (prosthetic) NRUs that in turn are then gradually replaced with informationalist-functionalist (computational/emulatory) NRUs. My qualms with this approach today stem from the observation that the operational modalities of the physically embodied NRUs seem as discontinuous in relation to the operational modalities of the computational NRUs as the operational modalities of the biological neurons do. The problem seems to result from the lack of an intermediary stage between physical embodiment and computational (or second-order) embodiment.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Embedded Processor. (2013). In Encyclopædia Britannica. Retrieved from http://www.britannica.com/EBchecked/topic/185535/embedded-processor

Pine, J. (1980). Recording action potentials from cultured neurons with extracellular microcircuit electrodes. Journal of Neuroscience Methods, 2(1), 19-31.

Wolf, W. (2009, March). Cyber-physical Systems. In Embedded Computing. Retrieved February 28, 2013, from http://www.jiafuwan.net/download/cyber_physical_systems.pdf

Immortality: Material or Ethereal? Nanotech Does Both! – Article by Franco Cortese

Immortality: Material or Ethereal? Nanotech Does Both! – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
May 11, 2013
******************************

This essay is the second chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first chapter was previously published on The Rational Argumentator as “The Moral Imperative and Technical Feasibility of Defeating Death”.

In August 2006 I conceived of the initial cybernetic brain-transplant procedure. It originated from a very simple, even intuitive sentiment: if there were heart and lung machines and prosthetic organs, then why couldn’t these be integrated in combination with modern (and future) robotics to keep the brain alive past the death of its biological body? I saw a possibility, felt its magnitude, and threw myself into realizing it. I couldn’t think of a nobler quest than the final eradication of involuntary death, and felt willing to spend the rest of my life trying to make it happen.

First I collected research on organic brain transplantation, on maintaining the brain’s homeostatic and regulatory mechanisms outside the body (or in this case without the body), on a host of prosthetic and robotic technologies (including sensory prosthesis and substitution), and on the work in Brain-Computer-Interface technologies that would eventually allow a given brain to control its new, non-biological body—essentially collecting the disparate mechanisms and technologies that would collectively converge to facilitate the creation of a fully cybernetic body to house the organic brain and keep it alive past the death of its homeostatic and regulatory organs.

I had by this point come across online literature on Artificial Neurons (ANs) and Artificial Neural Networks (ANNs), which are basically simplified mathematical models of neurons meant to process information in a way coarsely comparable to them. There was no mention in the literature of integrating them with existing neurons or of replacing existing neurons towards the objective of immortality; their use was merely as an interesting approach to computation, particularly optimal for certain situations. While artificial neurons can be run on general-purpose hardware (massively parallel architectures being the most efficient for ANNs, however), I had something more akin to neuromorphic hardware in mind (though I wasn’t aware of the term just yet).

At its most fundamental level, an Artificial Neuron need not even be physical at all. Its basic definition is a mathematical model roughly based on neuronal operation – and there is nothing precluding that model from existing solely on paper, with no actual computation going on. When I discovered them, I had thought that a given artificial neuron was a physically embodied entity rather than a software simulation – i.e., an electronic device that operates in a way comparable to biological neurons. Upon learning that they were mathematical models, however, and that each AN needn’t be a separate entity from the rest of the ANs in a given AN Network, I saw no problem in designing them so as to be separate physical entities (which they needed to be in order to fit the purposes I had for them – namely, the gradual replacement of biological neurons with prosthetic functional equivalents). Each AN would be a software entity run on a piece of computational substrate, enclosed in a protective casing allowing it to co-exist with the biological neurons already in place. The mathematical or informational outputs of the simulated neuron would be translated into biophysical, chemical, and electrical output by operatively connecting the simulation to an appropriate series of actuators (which could range from something as simple as producing electric fields or currents to the release of chemical stores of neurotransmitters), and likewise a series of sensors would translate biophysical, chemical, and electrical properties into the mathematical or informational form they would need to be in to be accepted as input by the simulated AN.

Thus at this point I didn’t make a fundamental distinction between replicating the functions and operations of a neuron via physical embodiment (e.g., via physically embodied electrical, chemical, and/or electromechanical systems) or via virtual embodiment (usefully considered as 2nd-order embodiment, e.g., via a mathematical or computational model, whether simulation or emulation, run on a 1st-order physically embodied computational substrate).

The potential advantages, disadvantages, and categorical differences between these two approaches were still a few months away. When I discovered ANs, still thinking of them as physically embodied electronic devices rather than as mathematical or computational models, I hadn’t yet moved on to ways of preserving the organic brain itself so as to delay its organic death. Their utility in constituting a more permanent, durable, and readily repairable supplement for our biological neurons wasn’t yet apparent.

I initially saw their utility as being intelligence amplification, extension and modification through their integration with the existing biological brain. I realized that they were categorically different than Brain-Computer Interfaces (BCIs) and normative neural prosthesis for being able to become an integral and continuous part of our minds and personalities – or more properly the subjective, experiential parts of our minds. If they communicated with single neurons and interact with them on their own terms—if the two were operationally indistinct—then they could become a continuous part of us in a way that didn’t seem possible for normative BCI due to their fundamental operational dissimilarity with existing biological neural networks. I also collected research on the artificial synthesis and regeneration of biological neurons as an alternative to ANs. This approach would replace an aging or dying neuron with an artificially synthesized but still structurally and operationally biological neuron, so as to maintain the aging or dying neuron’s existing connections and relative location. I saw this procedure (i.e., adding artificial or artificially synthesized but still biological neurons to the existing neurons constituting our brains, not yet for the purposes of gradually replacing the brain but instead for the purpose of mental expansion and amplification) as not only allowing us to extend our existing functional and experiential modalities (e.g., making us smarter through an increase in synaptic density and connectivity, and an increase in the number of neurons in general) but even to create fundamentally new functional and experiential modalities that are categorically unimaginable to us now via the integration of wholly new Artificial Neural Networks embodying such new modalities. Note that I saw this as newly possible with my cybernetic-body approach because additional space could be made for the additional neurons and neural networks, whereas the degree with which we could integrate new, artificial neural networks in a normal biological body would be limited by the available volume of the unmodified skull.

Before I discovered ANs, I speculated in my notes as to whether the “bionic nerves” alluded to in some of the literature I had collected by this point (specifically regarding BCI, neural prosthesis, and the ability to operatively connect a robotic prosthetic extremity – e.g., an arm or a leg – via BCI) could be used to extend the total number of neurons and synaptic connections in the biological brain. This sprang from my knowledge on the operational similarities between neurons and muscle cells, both of the larger class of excitable cells.

Kurzweil’s cyborgification approach (i.e., that we could integrate non-biological systems with our biological brains to such an extent that the biological portions become so small as to be negligible to our subjective-continuity when they succumb to cell-death, thus achieving effective immortality without needing to actually replace any of our existing biological neurons at all) may have been implicit in this concept. I envisioned our brains increasing in size many times over, such that the majority of our mind would be embodied or instantiated in larger part by the artificial portion than by the biological portions; the fact that the degree to which the loss of a part of our brain affects our emergent personalities depends on how big that lost part is in comparison to the total size of the brain (other potential metrics alternative to size include connectivity and the degree to which other systems depend on that portion for their own normative operation), the loss of a lobe being much worse than the loss of a neuron, follows naturally from this initial premise. The lack of any explicit statement of this realization in my notes during this period, however, makes this mere speculation.

It wasn’t until November 11, 2006, that I had the fundamental insight underlying mind-uploading—that the replacement of existing biological neurons with non-biological functional equivalents that maintain the existing relative location and connection of such biological neurons could very well facilitate maintaining the memory and personality embodied therein or instantiated thereby—essentially achieving potential technological immortality, since the approach is based on replacement and iterations of replacement-cycles can be run indefinitely. Moreover, the fact that we would be manufacturing such functional equivalents ourselves means that we could not only diagnose potential eventual dysfunctions easier and with greater speed, but we could manufacture them so as to have readily replaceable parts, thus simplifying the process of physically remediating any such potential dysfunction or operational degradation, even going so far as to include systems for the safe import and export of replacement components or as to make all such components readily detachable, so that we don’t have to cause damage to adjacent structures and systems in the process of removing a given component.

Perhaps it wasn’t so large a conceptual step from knowledge of the existence of computational models of neurons to the realization of using them to replace existing biological neurons towards the aim of immortality. Perhaps I take too much credit for independently conceiving both the underlying conceptual gestalt of mind-uploading, as well as some specific technologies and methodologies for its pragmatic technological implementation. Nonetheless, it was a realization I arrived at on my own, and was one that I felt would allow us to escape the biological death of the brain itself.

While I was aware (after a little more research) that ANNs were mathematical (and thus computational) models of neurons, hereafter referred to as the informationalist-functionalist approach, I felt that a physically embodied (i.e., not computationally emulated or simulated) prosthetic approach, hereafter referred to as the physicalist-functionalist approach, would be a better approach to take. This was because even if the brain were completely reducible to computation, a prosthetic approach would necessarily facilitate the computation underlying the functioning of the neuron (as the physical operations of biological neurons do presently), and if the brain proved to be computationally irreducible, then the prosthetic approach would in such a case presumably preserve whatever salient physical processes were necessary. So the prosthetic approach didn’t necessitate the computational-reducibility premise – but neither did it preclude such a view, thereby allowing me to hedge my bets and increase the cumulative likelihood of maintaining subjective-continuity of consciousness through substrate-replacement in general.

This marks a telling proclivity recurrent throughout my project: the development of mutually exclusive and methodologically and/or technologically alternate systems for a given objective, each based upon alternate premises and contingencies – a sort of possibilizational web unfurling fore and outward. After all, if one approach failed, then we had alternate approaches to try. This seemed like the work-ethic and conceptualizational methodology that would best ensure the eventual success of the project.

I also had less assurance in the sufficiency of the informational-functionalist approach at the time, stemming mainly from a misconception with the premises of normative Whole-Brain Emulation (WBE). When I first discovered ANs, I was more dubious at that point about the computational reducibility of the mind because I thought that it relied on the premise that neurons act in a computational fashion (i.e., like normative computational paradigms) to begin with—thus a conflation of classical computation with neural operation—rather than on the conclusion, drawn from the Church-Turing thesis, that mind is computable because the universe is. It is not that the brain is a computer to begin with, but that we can model any physical process via mathematical/computational emulation and simulation. The latter would be the correct view, and I didn’t really realize that this was the case until after I had discovered the WBE roadmap in 2010. This fundamental misconception allowed me, however, to also independently arrive at the insight underlying the real premise of WBE:  that combining premise A – that we had various mathematical computational models of neuron behavior – with premise B – that we can perform mathematical models on computers – ultimately yields the conclusion C – that we can simply perform the relevant mathematical models on computational substrate, thereby effectively instantiating the mind “embodied” in those neural operations while simultaneously eliminating many logistical and technological challenges to the prosthetic approach. This seemed both likelier than the original assumption—conflating neuronal activity with normative computation, as a special case not applicable to, say, muscle cells or skin cells, which wasn’t the presumption WBE makes at all—because this approach only required the ability to mathematically model anything, rather than relying on a fundamental equivalence between two different types of physical system (neuron and classical computer). The fact that I mistakenly saw it as an approach to emulation that was categorically dissimilar to normative WBE also helped urge me on to continue conceptual development of the various sub-aims of the project after having found that the idea of brain emulation already existed, because I thought that my approach was sufficiently different to warrant my continued effort.

There are other reasons for suspecting that mind may not be computationally reducible using current computational paradigms – reasons that rely on neither vitalism (i.e., the claim that mind is at least partially immaterial and irreducible to physical processes) nor on the invalidity of the Church-Turing thesis. This line of reasoning has nothing to do with functionality and everything to do with possible physical bases for subjective-continuity, both a) immediate subjective-continuity (i.e., how can we be a unified, continuous subjectivity if all our component parts are discrete and separate in space?), which can be considered as the capacity to have subjective experience, also called sentience (as opposed to sapience, which designated the higher cognitive capacities like abstract thinking) and b) temporal subjective-continuity (i.e., how do we survive as continuous subjectivities through a process of gradual substrate replacement?). Thus this argument impacts the possibility of computationally reproducing mind only insofar as the definition of mind is not strictly functional but is made to include a subjective sense of self—or immediate subjective-continuity. Note that subjective-continuity through gradual replacement is not speculative (just the scale and rate required to sufficiently implement it are), but rather has proof of concept in the normal metabolic replacement of the neuron’s constituent molecules. Each of us is a different person materially than we were 7 years ago, and we still claim to retain subjective-continuity. Thus, gradual replacement works; it is just the scale and rate required that are under question.

This is another way in which my approach and project differs from WBE. WBE equates functional equivalence (i.e., the same output via different processes) with subjective equivalence, whereas my approach involved developing variant approaches to neuron-replication-unit design that were each based on a different hypothetical basis for instantive subjective continuity.

Are Current Computational Paradigms Sufficient?

Biological neurons are both analog and binary. It is useful to consider a 1st tier of analog processes, manifest in the action potentials occurring all over the neuronal soma and terminals, with a 2nd tier of binary processing, in that either the APs’ sum crosses the threshold value needed for the neuron to fire, or it falls short of it and the neuron fails to fire. Thus the analog processes form the basis of the digital ones. Moreover, the neuron is in an analog state even in the absence of membrane depolarization through the generation of the resting-membrane potential (maintained via active ion-transport proteins), which is analog rather than binary for always undergoing minor fluctuations due to it being an active process (ion-pumps) that instantiates it. Thus the neuron at any given time is always in the process of a state-transition (and minor state-transitions still within the variation-range allowed by a given higher-level static state; e.g., resting membrane potential is a single state, yet still undergoes minor fluctuations because the ions and components manifesting it still undergo state-transitions without the resting-membrane potential itself undergoing a state-transition), and thus is never definitively on or off. This brings us to the first potential physical basis for both immediate and temporal subjective-continuity. Analog states are continuous, and the fact that there is never a definitive break in the processes occurring at the lower levels of the neuron represents a potential basis for our subjective sense of immediate and temporal continuity.

Paradigms of digital computation, on the other hand, are at the lowest scale either definitively on or definitively off. While any voltage within a certain range will cause the generation of an output, it is still at base binary because in the absence of input the logic elements are not producing any sort of fluctuating voltage—they are definitively off. In binary computation, the substrates undergo a break (i.e., region of discontinuity) in their processing in the absence of inputs, and are in this way fundamentally dissimilar to the low-level operational modality of biological neurons by virtue of being procedurally discrete rather than procedurally continuous.

If the premise holds true that the analog and procedurally continuous nature of neuron-functioning (including action potentials, the resting-membrane potential, and metabolic processes) forms a potential basis for immediate and temporal subjective-continuity, then current digital paradigms of computation may prove insufficient for maintaining subjective-continuity if used as the substrate in a gradual-replacement procedure, while still being sufficient to functionally replicate the mind in all empirically verifiable metrics and measures. This is due to both the operational modality of binary processing (i.e., the lack of analog output) and the procedural modality of binary processing (the lack of temporal continuity, i.e., of minor fluctuations relative to a baseline state, when in a resting or inoperative state). A logic element could, however, have a fluctuating resting voltage rather than no voltage at all, and could thus be procedurally continuous while remaining operationally discrete, producing solely binary outputs.
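As a toy illustration of that last point, the following sketch (the class name, voltages, and noise range are my own assumptions, not a description of any real hardware) models an element whose externally visible output is strictly binary, yet whose internal baseline keeps fluctuating instead of dropping to nothing.

```python
import random

class ContinuousLogicElement:
    """Hypothetical element: binary outputs, but no dead 'off' state between inputs."""

    def __init__(self, threshold_v=0.8, baseline_v=0.2, noise_v=0.05):
        # Assumed, illustrative voltages; a conventional gate would simply idle at ~0 V.
        self.threshold_v = threshold_v
        self.baseline_v = baseline_v
        self.noise_v = noise_v

    def idle_voltage(self):
        """Procedural continuity: with no input, the element still produces a
        slightly fluctuating baseline signal instead of going silent."""
        return self.baseline_v + random.uniform(-self.noise_v, self.noise_v)

    def output(self, input_v=None):
        """Operational discreteness: the externally visible output is only ever 0 or 1."""
        level = self.idle_voltage() if input_v is None else input_v
        return 1 if level >= self.threshold_v else 0

element = ContinuousLogicElement()
print(element.idle_voltage())  # changes from call to call: no break in activity
print(element.output())        # 0: below threshold while idle
print(element.output(1.0))     # 1: binary output once driven above threshold
```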

So there are two possibilities here. The first is that any physical substrate used to replicate a neuron (whether via 1st-order embodiment, i.e., prosthesis/physical systems, or via 2nd-order embodiment, i.e., computational emulation or simulation) must not undergo a break in its operation in the absence of input, because biological neurons do not, and this may be a potential basis for instantive subjective-continuity; rather, it must produce a continuous or uninterrupted signal when in a "steady state" (i.e., in the absence of inputs). The second possibility includes all the premises of the first, but adds that such an inoperative-state (or "no-inputs"-state) signal must undergo minor fluctuations, because only then is a steady stream of causal interaction occurring; producing an unchanging signal could be as discontinuous as producing no signal at all, like being "on pause".
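Stated as a design choice for a hypothetical neuron-replication unit's steady-state behaviour, the two possibilities differ only in whether the "no-inputs" signal is constant or fluctuating; the function and mode names below are illustrative assumptions, not part of any existing design.

```python
import random

def steady_state_signal(mode, baseline_v=0.2, noise_v=0.05):
    """Steady-state ('no-inputs') behaviour of a hypothetical neuron-replication unit.

    'constant'    - possibility one: an uninterrupted but unchanging signal.
    'fluctuating' - possibility two: the same signal with ongoing minor
                    fluctuations, so causal interaction never pauses.
    """
    if mode == "constant":
        return baseline_v
    if mode == "fluctuating":
        return baseline_v + random.uniform(-noise_v, noise_v)
    raise ValueError("mode must be 'constant' or 'fluctuating'")

print(steady_state_signal("constant"))     # identical on every call
print(steady_state_signal("fluctuating"))  # differs slightly on every call
```

On the second possibility, only the fluctuating mode keeps the unit causally active between inputs; the constant mode, like the silent gate, may amount to being "on pause".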

Thus one reason for developing the physicalist-functionalist (i.e., physically embodied prosthetic) approach to NRU design was to hedge our bets, in case a) current computational substrates fail to replicate a personally continuous mind for the reasons described above, b) we fail to discover the principles underlying a given physical process, and thus cannot predictively model it, yet still succeed in integrating that process into the artificial systems comprising the prosthetic approach until such time as its underlying principles can be discovered, or c) we encounter some other, heretofore unanticipated conceptual obstacle to the computational reducibility of mind.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Bibliography

Copeland, B. J. (2008). The Church-Turing Thesis. In The Stanford Encyclopedia of Philosophy (Fall 2008 Edition). Retrieved February 28, 2013, from http://plato.stanford.edu/archives/fall2008/entries/church-turing

Crick, F. (1984). Memory and molecular turnover. Nature, 312(5990), 101. PMID: 6504122

Criterion of Falsifiability. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/201091/criterion-of-falsifiability

Drexler, K. E. (1986). Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Books.

Grabianowski, E. (2007). How Brain-computer Interfaces Work. Retrieved February 28, 2013, from http://computer.howstuffworks.com/brain-computer-interface.htm

Koene, R. (2011). The Society of Neural Prosthetics and Whole Brain Emulation Science. Retrieved February 28, 2013, from http://www.minduploading.org/

Martins, N. R., Erlhagen, W. & Freitas Jr., R. A. (2012). Non-destructive whole-brain monitoring using nanorobots: Neural electrical data rate requirements. International Journal of Machine Consciousness. Retrieved February 28, 2013, from http://www.nanomedicine.com/Papers/NanoroboticBrainMonitoring2012.pdf

Narayan, A. (2004). Computational Methods for NEMS. Retrieved February 28, 2013, from http://nanohub.org/resources/407.

Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap (Technical Report #2008-3). Future of Humanity Institute, Oxford University. Retrieved February 28, 2013.

Star, E. N., Kwiatkowski, D. J. & Murthy, V. N. (2002). Rapid turnover of actin in dendritic spines and its regulation by activity. Nature Neuroscience, 5, 239-246.

Tsien, J. Z., Rampon, C., Tang, Y. P. & Shimizu, E. (2000). NMDA receptor dependent synaptic reinforcement as a crucial process for memory consolidation. Science, 290, 1170-1174.

Zwass, V. (2013). Neural Network. In Encyclopædia Britannica Online Academic Edition. Retrieved February 28, 2013, from http://www.britannica.com/EBchecked/topic/410549/neural-network

Wolf, W. (2009, March). Cyber-physical Systems. In Embedded Computing. Retrieved February 28, 2013, from http://www.jiafuwan.net/download/cyber_physical_systems.pdf