The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.
U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.
Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.
Adam Smith’s appreciation for the Stoic emperor’s writings is evident in his own work.
Who Was Marcus Aurelius?
Marcus Aurelius Antoninus Augustus was the last of the five good emperors of Rome. He was born in 121 AD, reluctantly became emperor in 161 AD, and reigned for 19 years until his death in 180 AD. His reign was punctuated by numerous wars during which he repelled Rome’s enemies in long campaigns. When not at the frontiers of the empire, he spent his time administering the law, focusing his attention particularly on the guardianship of orphans, the manumission of slaves, and choosing city councilors.
Lord Acton memorably stated, "Power tends to corrupt, and absolute power corrupts absolutely." Lord Acton's aphorism is, for the most part, true, but history offers one exception to it: Marcus Aurelius. He famously had a keen interest in philosophy. Perpetually practicing self-control and moderation in all aspects of his life, he was the closest any person ever came to embodying Plato's ideal of the "philosopher king."
While on the front lines of his campaign against the German tribes, Marcus Aurelius wrote his own personal diary. This was originally titled Ta Eis Heauton, meaning To Himself in Greek. Subsequent translations of the text changed the title numerous times; we now know it as Meditations. In Meditations, Marcus Aurelius writes his personal views on the Stoic philosophy.
He focuses heavily on the themes of finding one's place in the cosmic balance of the universe, the importance of analyzing one's actions, and being a good person. Asserting that people should be judged first and foremost on their actions, he decisively urged us to "waste no more time arguing about what a good man should be. Be one." Meditations is a masterpiece of Stoic philosophy, brimming with insightful, emotional and, most importantly, useful observations on morality and the human condition.
Who Was Adam Smith?
Adam Smith was a Scottish moral philosopher who is renowned as one of the first modern economists. He was born in 1723 in Kirkcaldy and died in 1790. He is famous for his two seminal works, The Wealth of Nations and The Theory of Moral Sentiments. His work was massively influential on classical liberal thought as he was one of the first defenders of the free market.
In The Wealth of Nations and The Theory of Moral Sentiments, Smith articulated a persuasive case for the efficacy and morality of a free-market commercial society. Ludwig von Mises, speaking about Smith's works, wrote that they "presented the essence of the ideology of freedom, individualism, and prosperity, with admirable clarity and in an impeccable literary form." Classical liberal economist Milton Friedman often wore a tie bearing a portrait of Adam Smith to formal events.
Adam Smith’s Readings of Marcus Aurelius
These two figures lived in vastly different times, under vastly different circumstances, so how did Marcus Aurelius ever influence Adam Smith? The answer lies in the ancient philosophy of Stoicism.
Stoicism was one of the three major schools of Greek philosophy in the ancient world. It was founded in Athens in the 3rd century BC by a man named Zeno of Citium. The name “Stoic” was given to the followers of Zeno, who used to congregate to hear him teach at the Athenian Agora, under the colonnade known as the Stoa Poikile. Over time, Stoicism expanded and developed sophisticated views on metaphysics, epistemology, and ethics.
While Stoicism posits numerous views on a huge variety of topics, its most interesting and relevant observations are on ethics. The Stoics were concerned with perfecting self-control which allowed for virtuous behavior. They believed that, through self-control, one could be free of negative emotions and passions which blinded objective judgment.
With a peaceful mind, the Stoics thought, people could live according to the universal reason of the world and practice a virtuous life. Marcus Aurelius described the ideal Stoic life in book three of Meditations, writing, “peace of mind in the evident conformity of your actions to the laws of reason, and peace of mind under the visitations of a destiny you cannot control.”
Adam Smith was educated at the University of Glasgow where he studied under Francis Hutcheson. Hutcheson was a Scottish intellectual and a leading representative of the Christian Stoicism movement during the Scottish Enlightenment. He hosted private noontime classes on Stoicism which Adam Smith often attended. Smith’s preference for Marcus Aurelius was encouraged by Hutcheson, who published his own translation of Meditations.
In The Theory of Moral Sentiments, Smith referred to Marcus Aurelius as “the mild, the humane, the benevolent Antoninus,” demonstrating his deep admiration for the Stoic emperor. Marcus Aurelius influenced Adam Smith in three main areas: the idea of an inner conscience; the importance of self-control; and in his famous analogy of the “Invisible Hand.”
Our Inner Conscience
Both Marcus Aurelius and Adam Smith believed that the key to understanding morality was through self-scrutiny and sympathy for others.
Marcus Aurelius wrote Meditations in the form of a self-reflective dialogue with his inner self. He thought that moral conviction lay within "the very god that is seated in you, bringing your impulses under its control, scrutinizing your thoughts." He interchangeably referred to this inner god as the soul or the helmsman and believed it to be a voice within that attempts to dissuade you from immoral deeds; we now call this a conscience.
Similarly, Smith emphasized the role of people’s innermost thoughts. A key aspect of Smith’s moral philosophy in The Theory of Moral Sentiments is the impartial spectator. Smith theorized that morality could be understood through the medium of sympathy. He thought that before people acted they ought to look for the approval of an impartial spectator.
“But though man has… been rendered the immediate judge of mankind, he has been rendered so only in the first instance; and an appeal lies from his sentence to a much higher tribunal, to the tribunal of their own consciences, to that of the supposed impartial and well-informed spectator, to that of the man within the breast, the great judge and arbiter of their conduct.”
The Importance of Self-Control
The Stoics listed four “cardinal virtues” — wisdom, justice, courage, and temperance — for which they held great reverence. These were believed to be expressions and manifestations of a single indivisible virtue. Smith used slightly different names, but he endorsed the same set of virtues and the idea that they were all facets of one indivisible virtue.
Smith and Aurelius had a mutual appreciation for the virtue of self-control. They both believed in an impartial, self-scrutinizing conscience that guided morality: while Aurelius called it the God Within, Smith called it the Impartial Spectator.
Marcus Aurelius said, "You have power over your mind — not outside events. Realize this, and you will find strength." The primacy of self-control is intrinsic to the Stoic philosophy. In a similar vein of thought, Smith writes that "self-command is not only itself a great virtue, but from it all the other virtues seem to derive their principal lustre." This respect for self-control was encouraged and cultivated by Smith's Impartial Spectator and Marcus Aurelius' Inner God.
The Invisible Hand
Marcus Aurelius stresses the vital nature of human cooperation, arguing that we must work together in order to improve humanity as a whole; as he puts it, we "were born to work together."
“Constantly think of the universe as one living creature, embracing one being and soul; how all is absorbed into the one consciousness of this living creature; how it compasses all things with a single purpose, and how all things work together to cause all that comes to pass, and their wonderful web and texture.”
In The Wealth of Nations and The Theory of Moral Sentiments, Adam Smith’s defense of the free market is expressed through the analogy of the Invisible Hand. Smith argues that in a society of free exchange and free markets, people must sympathize with one another and understand how best to benefit their fellow man in order to better their own situation.
“It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages.”
The transaction will not occur unless the parties involved demonstrate their sympathy for the interests of others. In the analogy of the Invisible Hand, Smith argues that we must think of others before ourselves and consider how best to serve our fellow neighbor. This famous passage bears a striking resemblance to the previous passage by Marcus Aurelius who also argues for the importance of conscious cooperation among people for the common good.
We Are All Standing on the Shoulders of Giants
A Roman emperor seems like an unlikely intellectual influence for a classical liberal thinker such as Adam Smith. Upon closer inspection, however, Smith and Aurelius are like two peas in a pod: both men believed that the root of morality lies within the self-scrutiny of one's conscience; both believed in the primacy of the virtue of self-control; and both believed in the importance of sympathy as a tool for cooperation and the betterment of civilized society.
No thinker is entirely alone in their pursuit of truth. All people discover truth by building upon the previous discoveries of others. This explains how an emperor came to influence so strongly an Enlightenment moral philosopher and economist more than a thousand years after he had passed away. I believe that the best expression of the development of such ideas was written by a medieval philosopher and bishop, John of Salisbury, who spoke of the wisdom of Bernard of Chartres:
“He pointed out that we see more and farther than our predecessors, not because we have keener vision or greater height, but because we are lifted up and borne aloft on their gigantic stature.”
We are all dwarfs standing on the shoulders of giants in the pursuit of the system of natural liberty and prosperity that Adam Smith sought during his lifetime.
Paul Meany is a student at Trinity College Dublin studying Ancient and Medieval History and Culture.
Academics like to say that we teach “critical thinking” without thinking too critically about what it means to think critically.
Being Critical, Not Thinking Critically
Too often in practice, people equate critical thinking with merely being skeptical of whatever they hear. Or they will interpret it to mean that, when confronted with someone who says something that they disagree with, they either:
a) stop listening (and perhaps then start shouting),
b) find a way to squeeze the statement into their pre-existing belief system (if they can't, they stop thinking about it), or
c) attempt to "educate" the speaker about why their statement or belief system is flawed. When this inevitably fails, they stop speaking to the person, at least about the subject in question.
Ultimately, each of these responses leaves us exactly where we started, and indeed stunts our intellectual growth. I confess that I do a, b, and c far too often (except I don’t really shout that much).
To me, critical thinking means, at a minimum, questioning a belief system (especially my own) by locating the premises underlying a statement or conclusion, whether we agree with it or not, and asking:
1) whether or not the thinker’s conclusions follow from those premises,
2) whether or not those premises are “reasonable,” or
3) whether or not what I consider reasonable is “reasonable” and so on.
This exercise ranges from hard to excruciatingly uncomfortable – at least when it comes to examining my own beliefs. (I’ve found that if I dislike a particular conclusion it’s hard to get myself to rigorously follow this procedure; but if I like a conclusion it’s often even harder.)
Teaching Critical Thinking
Fortunately, people have written articles and books that offer good criticisms of most of my current beliefs. Of course, it’s then up to me to read them, which I don’t do often enough. And so, unfortunately, I don’t think critically as much as I should…except when I teach economics.
It’s very important, for example, for a student to critically question her teacher, but that’s radically different from arguing merely to win. Critical thinking is argument for the sake of better understanding, and if you do it right, there are no losers, only winners.
Once in a while, a student speaks up in class and catches me in a contradiction – perhaps I’ve confused absolute advantage with comparative advantage – and that’s an excellent application of genuine critical thinking. As a result we’re both now thinking more clearly. But when a student or colleague begins a statement with something like “Well, you’re entitled to your opinion, but I believe…” that person may be trying to be critical (of me) but not in (or of) their thinking.
It may not be the best discipline for this, but I believe economics does a pretty good job of teaching critical thinking in the sense of #1 (logical thinking). Good teachers of economics will also strategically address #2 (evaluating assumptions), especially if they know something about the history of economic ideas.
Economics teachers with a philosophical bent will sometimes address #3 but only rarely (otherwise they’d be trading off too much economic content for epistemology). In any case, I don’t think it’s possible to “get to the bottom” of what is “reasonable reasonableness” and so on because what ultimately is reasonable may, for logical or practical reasons, always lie beyond our grasp.
I could be wrong about that or indeed any of this. But I do know that critical thinking is a pain in the neck. And that I hope is a step in the right direction.
Sanford (Sandy) Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism. He is a member of the FEE Faculty Network.
How do we know what we know? Philosophers have pondered this question from time immemorial. Julian Jaynes, in his classic book, The Origin of Consciousness in the Breakdown of the Bicameral Mind, speculates that before the development of modern human consciousness, people believed they were informed by voices in their heads. Today, an alarming number of people are responding to voices on the Internet in similarly uncritical fashion.
As Jesuit scholar John Culkin pointed out in his seminal 1967 Saturday Review article, "A Schoolman's Guide to Marshall McLuhan," "We shape our tools and, thereafter, they shape us." Examining history through this lens, one can identify seven great epochs in mankind's intellectual and social evolution. Each is characterized by the way a new technology changed not only how we think about the world, but our actual thought processes. These are:
1) Spoken language, which first led to the primacy of mythology;
2) Written language, which bequeathed to us holy books and the world’s great religions;
3) The printing press, which spread literacy to the elites who went on to birth the nation state, the Reformation, the Enlightenment, and the U.S. Constitution;
4) The telegraph, which transformed pamphlets and broadsheets into modern newspapers, whose agenda-setting influence goaded America to “Remember the Maine” and become an imperialist power;
5) Radio, which placed broadcast propaganda at the service of central planners, progressives, and tyrants;
6) Television, which propelled the rising tide of the counterculture, environmentalism, and globalism; and
7) The Internet, a nascent global memory machine that puts the Library of Alexandria to shame, yet fits in everyone’s pocket.
Reason’s primacy is a fragile thing.
At each transition, the older environment and way of thinking does not disappear. Rather, it adopts an extreme defensive crouch as it attempts to retain power over men’s minds. It is the transition from the Age of Television to the Age of the Internet that concerns us here, as it serves up an often-toxic brew of advocacy and click-bait journalism competing to feed the masses an avalanche of unverifiable information, often immune to factual or logical refutation.
Rational epistemology holds that reason is the chief test and source of knowledge, and that each of us is not just capable of practicing it, but is responsible for doing so. Reason flowered when the Enlightenment overturned the ancient wisdom of holy books, undermining the authority of clerics and the divine right of kings. Wherever reason is widely practiced and healthy skepticism is socially accepted, error becomes self-correcting (rather than self-amplifying, as under a system based on superstition), as new propositions are tested, while old propositions get reexamined as new facts come to light.
So now that the voices have returned to our heads, we are inadequately prepared to defend against them.
Yet, reason's primacy is a fragile thing. As increasingly potent electronic media confer influence on new voices, formerly dominant media and governing elites fight a rearguard action to regain their status as ultimate arbiters of knowledge and what matters. Goebbels proved that a lie repeated loudly and frequently in a culture that punished skepticism became accepted as truth. We all know how that turned out.
Revulsion at the carnage of the Second World War crested with the counterculture revolution driven by the first TV generation. By the time the dust settled, its thought leaders had grabbed control of the academy, reshaping it along postmodern lines that included an assault on language that critics dubbed political correctness. This was intentionally designed to constrain what people can think by restraining what they can say. The intention may have been to avert a repeat of the horrors of the 20th century, but the result was to strip much of the educated populace of the mental tools needed to ferret out error.
So now that the voices have returned to our heads, we are inadequately prepared to defend against them. Digitally streamed into every nook and cranny of our ubiquitously connected lives, these voices are filtered by our own self-reinforcing preferences and prejudices, becoming our own in the process. The result is an ongoing series of meme-driven culture wars where the shouting only gets louder on all sides.
So we come back to the question: How do we know what we know?
What causes crime? Is autism linked to vaccines? Should GMOs be banned? Is global warming “settled science”? These are more than factual questions. Responses to them signal identification with an array of ever more finely differentiated identity groups set at each other’s throats. For those who wish to divide and rule, that’s the whole point.
In a cruel irony, this global outbreak of media-induced public schizophrenia has even empowered jihadists bent on taking the world back to the 10th century using the idea-spreading tools of the Internet to challenge a Western Civilization rapidly losing its mojo.
So we come back to the question: How do we know what we know? At the present time, we don’t. And therein lies the problem.
Libertarians are understandably frustrated with the state of higher education today. Libertarian ideas often do not get covered, or are covered unfairly. Faculty are overwhelmingly left-of-center, and government subsidies have driven up costs, leading to higher student debt.
These are legitimate concerns, of course. However, the solution to these problems is not to abolish the institution of tenure. Tenure is not anti-liberty, and it provides important protections for those who are libertarians (and conservatives) in academia. In addition, it has some efficiency properties that explain why it has survived and might well do so even in a world where the state had no role in higher ed.
There are many potential objections to tenure. For some, the idea that a tenured professor cannot be fired strikes them as a rejection of the free market. Others believe that tenure is a way of protecting leftist faculty, even if their ideas are wrong-headed, and students don’t wish to hear them. In that way, tenure is a kind of monopoly protection for bad ideas. Finally, people across the political spectrum believe that tenure creates so-called “deadwood” faculty who, once they are tenured, no longer have to care about their teaching or research.
First, let’s dispatch a common misconception: it is not true that tenured professors cannot be fired. Tenured professors can be fired for a variety of reasons. What tenure does is limit what counts as a valid reason for dismissal in order to protect academic freedom. A tenured professor can be fired if caught plagiarizing, or found guilty of sexual or other forms of harassment, or convicted of violent crime. But if she can be fired for writing an article that the dean disapproves of, she cannot perform her job. And that is where tenure comes in.
Understanding why tenure is a desirable institution requires us to remember the purpose of a university. Universities are, ideally, institutional arrangements that enable scholars to engage in the activities of seeking the truth and then sharing the fruits of our scholarship with students, other scholars, and perhaps the general public.
Essential to that project is that scholars are free to seek the truth as we see it, without interference by others who have different goals. Of course, scholars must play by some very simple rules of the game: do not lie or cheat; do not distort your data or the arguments of your sources; be transparent about conflicts of interest; do not engage in personal attacks or the use of force, among others.
If this sounds familiar, that’s because the search for truth is a discovery process analogous to the market. Just as entrepreneurs in a market require the freedom to discover value where their best judgment takes them, subject to rules against force and fraud, so do scholars in a university require the freedom to discover truth where our best judgment takes us.
Tenure protects scholars like us from interference with our attempts to discover truth. Scholars cannot engage in truth-seeking if we’re facing retaliation from people who don’t like where our research leads. A university cannot be a university without robust protection of the open exchange of ideas and the freedom of each scholar to research in his or her field without intimidation.
By ruling out the possibility of firing a professor simply for the content of her beliefs, tenure ensures that the university will be what Michael Polanyi called “a republic of science,” in which truth-seeking is the highest standard.
Skeptics might argue that even if tenure were abolished, faculty still wouldn’t leave their current jobs because they would find it difficult to get hired elsewhere. But that’s not the point. The point is that we cannot do our jobs without a credible guarantee of academic freedom, and tenure is one way to secure that.
Tenure protects academic freedom in three distinct ways. First, when we engage in research and publishing, we can’t be worried that some administrator, trustee, politician, or even a student activist will find our work offensive and retaliate against us. This will have a chilling effect on our ability to seek the truth, which is our job as college professors. There are numerous examples of libertarian and conservative faculty facing just these sorts of threats, and tenure is the primary reason those threats are empty.
Second, when we construct and teach our curricula, we can’t worry that any of the usual suspects will take offense, or try to substitute their judgment for ours. Finally, when participating in institutional decision making about academic matters, we can’t be afraid to call shenanigans on various administrator-driven fads (of which there are many) that would undermine our ability to engage in research and teaching.
Although we are open to alternative institutional processes if they could be shown to adequately protect academic freedom, abolishing tenure in their absence is a dicey proposition. Absent tenure, it is libertarians and conservatives who would be the first to be persecuted, censored, or silenced.
Politically correct ideas don’t need the protection of tenure because they are popular; tenure protects ideas that are not. Abolishing it would give still more power to the activists and administrators who seek to create an ideologically uniform academy.
Given those concerns, how big is the downside to tenure? If the complaint is that some faculty’s research productivity declines after tenure, then an easy fix is to have continued productivity tied to merit raises. Nothing about the institution of tenure precludes post-tenure reviews and merit pay. And even if some faculty slack off as publishers, so what? As long as they’re good teachers, mentors, and colleagues, it’s not necessary that all college faculty be active publishers their whole careers.
Tenure offers a beneficial set of incentives for many universities. Faculty want the protections we have outlined above, and universities want to encourage faculty to develop university-specific human capital to better serve their educational vision and the type of students they attract. Faculty are reluctant to make those specific investments when the opportunity cost is time not spent building the publication record that would make them more attractive on the job market.
Tenure is a commitment by the institution to maintain a faculty member’s employment in return for abiding by some basic rules and demonstrating during the tenure process that they have acquired that institution-specific human capital. The faculty member gets enhanced, but not total, job security, and the institution gets someone committed to its particular needs. In this way, tenure is like marriage: we bind ourselves to an arrangement with high exit costs in order to incentivize ourselves to commit to the relationship. Just as marriage is compatible with a free society, so is tenure.
There are many problems with contemporary higher education, but tenure isn’t one of them. Ending tenure would exacerbate many of those issues while also creating new ones. And in an institutional setting where the opponents of liberty hold most of the cards, getting rid of one of the most important institutions that protects dissent and the ability to seek the truth will only silence the friends of liberty.
Aeon J. Skoble is Professor of Philosophy at Bridgewater State University.
Mr. Stolyarov endeavors to refute the common argument that any law, be it a physical law or a law of morality or justice, requires a lawgiver – an intelligent entity that brought the law into being. While some laws (termed manmade or positive laws) do indeed have human lawmakers, a much more fundamental class of laws (termed universal or natural laws) arise not due to promulgation by any intelligent being, but rather due to the basic properties of the entities these laws concern, and the relations of those entities to one another. To the extent that positive laws are enacted by humans, the purpose of such positive laws should be to reflect and effectuate the beneficial consequences of objectively valid natural laws.
– Formula for the Universal Law of Gravitation: F = G*m1*m2/r^2, with F being the force between two masses, m1 and m2 being the two masses, r being the distance between the centers of the two masses, and G being the universal gravitational constant.
Here I endeavor to refute the common argument that any law, be it a physical law or a law of morality or justice, requires a lawgiver – an intelligent entity that brought the law into being. While some laws (termed manmade or positive laws) do indeed have human lawmakers, a much more fundamental class of laws (termed universal or natural laws) arise not due to promulgation by any intelligent being, but rather due to the basic properties of the entities these laws concern, and the relations of those entities to one another. To the extent that positive laws are enacted by humans, the purpose of such positive laws should be to reflect and effectuate the beneficial consequences of objectively valid natural laws. For instance, it is a natural law that each human being possesses a right to life. A positive law that prohibits and punishes murder of one human being by another would reflect the natural law and therefore be desirable. On the other hand, if any positive law were to mandate murder (as various edicts by tyrannical regimes throughout history, targeting political dissidents or disfavored minority groups, have done), then that positive law would be contrary to the natural law and therefore illegitimate and harmful.
The physical laws of nature pertain to all entities, including humans, and describe the regularities with which these entities will behave within applicable situations. Examples of physical laws include Newton’s Three Laws of Motion, the law of gravitation, the law of conservation of matter and energy, and the law of conservation of momentum. If it is asserted that these laws require a lawgiver, then the lawgiver would hypothetically be able to alter these laws on a whim at any time, thereby depriving them of their universality and predictable application. Such a state of affairs would not only be highly inconvenient (to say the least), but also completely incompatible with the reality that these laws are derived from the nature of entities as they are.
We can draw upon ubiquitous observation and the fact that these laws of nature can indeed be harnessed so precisely that every functional technology ever invented works because it takes advantage of them. The argument that the laws of nature could change tomorrow depends on a false perception of what those laws are – a kind of Platonic view that the laws of nature are superimposed upon the world of objects. In reality, however, objects (entities) and their qualities and relationships are all that exist at the most basic level. The laws of nature are relationships that are derived from the very properties inherent to objects themselves; they are not some higher layer of reality on top of the objects that leads the objects to behave in a certain way. That is, the laws of nature are what they are because the things whose behavior they describe are what they are.
The truth that the laws of nature are a function of the objects whose behavior they describe pertains to fundamental physical laws, such as the law of gravitation. While the law of gravitation and the equation F = G*m1*m2/r^2 describing that law apply universally, the very existence of the law is dependent on the existence of entities that have mass and therefore exhibit gravitational attraction. Were there no entities, or no entities with mass (incidentally, both logically impossible scenarios), then the concept of gravity would not have any relevance or applicability. Likewise, the amount of mass of particular entities and their distance of separation from one another will determine the extent of the gravitational force exerted by those entities upon one another. The gravitational force arises because the entities are as massive as they are and located where they are relative to one another; it does not arise because a supernatural lawgiver imposed it upon entities who would otherwise be completely static or random in their behavior in relation to one another.
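The point that the force follows entirely from the properties of the bodies themselves can be illustrated numerically. The sketch below applies the gravitation formula to the Earth-Moon pair; the masses and mean separation used are approximate textbook values chosen for illustration, not figures taken from the essay:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
# The force is fully determined by the bodies' masses and separation --
# no further input beyond the entities' own properties is required.

G = 6.674e-11        # universal gravitational constant, N*m^2/kg^2
m_earth = 5.972e24   # mass of the Earth, kg (approximate)
m_moon = 7.348e22    # mass of the Moon, kg (approximate)
r = 3.844e8          # mean Earth-Moon distance, m (approximate)

def gravitational_force(m1: float, m2: float, distance: float) -> float:
    """Force in newtons between two point masses m1 and m2 at the given distance."""
    return G * m1 * m2 / distance**2

force = gravitational_force(m_earth, m_moon, r)
print(f"Earth-Moon gravitational force: {force:.3e} N")  # on the order of 2e20 N
```

Changing either mass or the separation changes the force in lockstep with the formula, which is exactly the essay's claim: the regularity lives in the entities' properties, not in a decree layered on top of them.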
The key parallel with the laws of morality is that, as the laws of gravitation stem from the objective properties of entities themselves (i.e., that they have mass – which is a universal property of all entities), so do the laws of morality stem from the objective properties of human beings themselves – namely, the biological and physical prerequisites of human survival and flourishing. Different specific decisions may be the appropriate moral decisions in different contexts, but because of the essential similarities of humans along many key dimensions, certain general moral truths will hold universally for all humans. But again, were there no humans (or similar rational, sentient, volitional beings) with these essential attributes, the concept of morality would have no relevance.
Neither morality nor gravitation require the existence of entities outside of those exhibiting moral behavior or gravitational attraction. A system of physical or moral laws is not dependent on an outside “lawgiver” but rather on the objective natures of the entities partaking in the system. Objective moral laws include the principles of ethics, which address how a person should behave to maximize possible well-being, as well as the principles of justice, which address how people should relate to one another in respecting one another’s spheres of legitimate action, rewarding meritorious conduct, and punishing destructive conduct against others. There is a natural harmony between adherence to objective moral laws and the attainment of beneficial consequences for one’s own life, material prosperity, and happiness – provided that one adheres to a view of long-term, enlightened, rational self-interest, which does not allow one to sacrifice the lives, liberty, or property of others to achieve a short-term gain.
Some would assert that principles of behavior that tend to maximize well-being and serve one’s rational self-interest may be part of prudent or practical conduct, but are not the same as morality. In the minds of these individuals, morality (typically, in their view, willed by an external lawgiver) is independent of practical means or consequences and often (as, for instance, in Immanuel Kant’s outlook on morality) inherently divorced from actions conducive to self-interest. I, however, strongly reject any notion that there might be a dichotomy between morality and practicality, happiness, or prosperity – when a long-term, enlightened, and multifaceted outlook on the latter conditions is considered. Some might be so short-sighted as to mistake some temporary advantage or fleeting pleasure for true fulfillment or happiness, but the objective cause-and-effect relationships within our physical reality will eventually disappoint them (if they live long enough – and if not, their punishment – death – will be even greater). If some or even many humans might be drawn toward certain pleasurable feelings for their own sake (which is an evolutionary relic of a very different primeval environment inhabited by our ancestors – but a tendency ill-adapted to our current environment), this is not the same as achieving truly sustainable prosperity and happiness by using reason to thrive in our current environment (or to create a better environment for human flourishing). One of the objectives of a good moral system is to guide people toward the latter outcome. My essay and video “Commonly Misunderstood Concepts: Happiness” offer more detailed thoughts on key elements of a life of flourishing and the concept of eudaemonia – the actualization of one’s full potential, as Aristotle and later virtue-oriented philosophers described it.
Objective moral law, derived from the fundamental value of every innocent rational, sentient being’s life, posits an essential harmony of the long-term, enlightened self-interests of all who earnestly pursue truth and goodness. Unlike many proponents of an externally legislated moral framework (for which the alleged lawgiver might be a supernatural being, a single human ruler, or a collective of humans), I would not consider self-sacrifice to be a component of morality. I align more with Ayn Rand’s view of sacrifice as a surrender of a greater value (e.g., one’s life) to a lesser value (e.g., abstractions such as nation-states, religions, or perceived slights from another nation-state or religious or cultural group). A person can behave morally – promoting his own life, respecting the rights of others, and contributing to human flourishing – without ever surrendering anything he values (except as an instrument for obtaining outcomes he might justifiably value more). Morality should therefore not be seen as the subordination of the individual to some higher ideal, be it a divine order or a manmade one. Rather, the individual is the ideal for which moral behavior is the path to fulfillment.
A person who behaves morally advances himself while fully respecting the legitimate prerogatives of others. He improves his own life without damaging anybody else’s. In the process of pursuing enlightened self-interest, he also benefits the lives of others through value-adding interactions. Indeed, he may enter into an extensive network of both formal and informal reciprocal obligations with others that result in his actions being a constant, sustainable source of improvement in others’ lives. The virtue of honesty is part of objective ethics and impels a moral individual to strive to honor all commitments once they have been made. The key to a morality based on objective, natural law, however, is that these obligations be entered into freely and not as a result of the self being compromised in favor of an alleged higher ideal. Consequently, a key component of natural law is the liberty of an individual to evaluate the world in accordance with his rational faculty and to decide which undertakings are consistent with his enlightened self-interest. When positive laws are crafted so as to interfere with that liberty, positive law becomes at odds with natural law, leading to warped incentives, institutionalized sacrifices, and painful tradeoffs that many individuals must make if they seek to abide by both natural and positive laws.
Objective natural laws – both physical and moral – do not require a lawgiver and antecede manmade, positive laws. Some natural laws, however, may require positive laws – such as prohibitions on murder, theft, and slavery – in order for the desirable outcome brought about by the natural laws to be reflected in actual (rather than simply hoped-for) human behavior. In order to improve human well-being, positive laws should be developed to advance and effectuate natural laws, instead of attempting to resist them or contravene them. Just as a law that redefines the value of pi as 3.2 (one actually unsuccessfully attempted in Indiana in 1897) is rightly seen as absurd on its face, even if a majority votes to enact it, and would result in many failed constructions if implemented by engineers and designers of machines, so would a law that abrogates the natural liberty of individuals to peacefully pursue their own flourishing result in damage to good human beings and increases in physical harm, suffering, and injustice. A good human lawmaker should respect pre-existing objective natural laws and not attempt to contradict them.
F = G·m₁·m₂/r², where F is the force between the two masses, m₁ and m₂ are the two masses, r is the distance between the centers of the two masses, and G is the universal gravitational constant.
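As a brief illustration that the force follows entirely from the objects' own properties (their masses and separation), here is a minimal Python sketch of Newton's formula; the masses and distance used are arbitrary illustrative values, not drawn from the essay:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
# The force is fully determined by the entities' masses and separation.

G = 6.674e-11  # universal gravitational constant, in N*m^2/kg^2

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Force in newtons between point masses m1 and m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r**2

# Illustrative values: two 1,000 kg masses one meter apart.
force = gravitational_force(1000.0, 1000.0, 1.0)
print(f"{force:.3e} N")  # prints 6.674e-05 N
```

Note that doubling the separation quarters the force: the inverse-square relationship is fixed by the structure of the formula itself, not by anything external to the interacting bodies.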
Imagine you are considering a candidate as a caregiver for your child. Or maybe you are vetting an applicant for a sensitive position in your company. Perhaps you’re researching a public figure for a class or before endorsing him in some manner. Whatever the situation, you open your browser and assess the linked information that pops up from a search. Nothing criminal or otherwise objectionable is present, so you proceed with confidence. But what if the information required for you to make a reasoned assessment had been removed by the individual himself?
The law’s purpose is to prevent people from being stigmatized for life. The effect, however, is to limit freedom of the press, freedom of speech, and access to information. Each person becomes a potential censor who can rewrite history for personal advantage.
It couldn’t happen here
The process of creating such a law in the United States is already underway. American law is increasingly driven by public opinion and polls. The IT security company Software Advice recently conducted a survey that found that “sixty-one percent of Americans believe some version of the right to be forgotten is necessary,” and “thirty-nine percent want a European-style blanket right to be forgotten, without restrictions.” And politicians love to give voters what they want.
What form would the laws likely take? In the Stanford Law Review (February 13, 2012), legal commentator Jeffrey Rosen presented three categories of information that would be vulnerable if the EU rules became a model. First, material posted could be “unlinked” at the poster’s request. Second, material copied by another site could “almost certainly” be unlinked at the original poster’s request unless its retention was deemed “necessary” to “the right of freedom of expression.” Rosen explained, “Essentially, this puts the burden on” the publisher to prove that the link “is a legitimate journalistic (or literary or artistic) exercise.” Third, the commentary of one individual about another, whether truthful or not, could be vulnerable. Rosen observed that the EU includes “takedown requests for truthful information posted by others.… I can demand takedown and the burden, once again, is on the third party to prove that it falls within the exception for journalistic, artistic, or literary exception.”
America protects the freedoms of speech and the press more vigorously than Europe does. Even California’s limited version of a “right to be forgotten” bill has elicited sharp criticism from civil libertarians and tech-freedom advocates. The IT site TechCrunch expressed the main practical objection: “The web is chaotic, viral, and interconnected. Either the law is completely toothless, or it sets in motion a very scary anti-information snowball.” TechCrunch also expressed the main political objection: The bill “appears to create a head-on collision between privacy law and the First Amendment.”
Conflict between untrue information and free speech need not occur. Peter Fleischer, Google’s global privacy counsel, explained, “Traditional law has mechanisms, like defamation and libel law, to allow a person to seek redress against someone who publishes untrue information about him.… The legal standards are long-standing and fairly clear.” Defamation and libel are controversial issues within the libertarian community, but the point here is that defense against untrue information already exists.
What of true information? Truth is a defense against being charged with defamation or libel. America tends to value freedom of expression above privacy rights. It is no coincidence that the First Amendment is first among the rights protected by the Constitution. And any “right” to delete the truth from the public sphere runs counter to the American tradition of an open public square where information is debated and weighed.
Moreover, even true information can have powerful privacy protection. For example, the Fourth Amendment prohibits the use of data that is collected via unwarranted search and seizure. The Fourteenth Amendment is deemed by the Supreme Court to offer a general protection to family information. And then there are the “protections” of patents, trade secrets, copyrighted literature, and a wide range of products that originate in the mind. Intellectual property is controversial, too. But again, the point here is that defenses already exist.
Reputation capital consists of the good or bad opinions that a community holds of an individual over time. It is not always accurate, but it is what people think. The opinion is often based on past behaviors, which are sometimes viewed as an indicator of future behavior. In business endeavors, reputation capital is so valuable that aspiring employees will work for free as interns in order to accrue experience and recommendations. Businesses will take a loss to replace an item or to otherwise credit a customer in order to establish a name for fairness. Reputation is thus a path to being hired and to attracting more business. It is a nonfinancial reward for establishing the reliability and good character upon which financial remuneration often rests.
Conversely, if an employee’s bad acts are publicized, then a red flag goes up for future employers who might consider his application. If a company defrauds customers, community gossip could drive it out of business. In the case of negative reputation capital, the person or business who considers dealing with the “reputation deficient” individual is the one who benefits by realizing a risk is involved. Services, such as eBay, often build this benefit into their structure by having buyers or sellers rate individuals. By one estimate, a 1 percent negative rating can reduce the price of an eBay good by 4 percent. This system establishes a strong incentive to build positive reputation capital.
Reputation capital is particularly important because it is one of the key answers to the question, “Without government interference, how do you ensure the quality of goods and services?” In a highly competitive marketplace, reputation becomes a path to success or to failure.
Right-to-be-forgotten laws offer a second chance to an individual who has made a mistake. This is a humane option that many people may choose to extend, especially if the individual will work for less money or offer some other advantage in order to win back his reputation capital. But the association should be a choice. The humane nature of a second chance should not overwhelm the need of others for public information to assess the risks involved in dealing with someone. Indeed, this risk assessment provides the very basis of the burgeoning sharing economy.
History and culture are memory
In “The Right to Be Forgotten: An Insult to Latin American History,” Eduardo Bertoni offers a potent argument. He writes that the law’s “name itself” is “an affront to Latin America; rather than promoting this type of erasure, we have spent the past few decades in search of the truth regarding what occurred during the dark years of the military dictatorships.” History is little more than preserved memory. Arguably, culture itself lives or dies depending on what is remembered and shared.
And yet, because the right to be forgotten has the politically seductive ring of fairness, it is becoming a popular view. Fleischer called privacy “the new black in censorship fashion.” And it may be increasingly on display in America.
Note from the Author: This essay was originally published as part of Issue CLXXXVI of The Rational Argumentator on February 4, 2009, using the Yahoo! Voices publishing platform. Because of the imminent closure of Yahoo! Voices, the essay is now being made directly available on The Rational Argumentator.
~ G. Stolyarov II, July 22, 2014
Many advocates of free markets, reason, and liberty are content to just sit back and let things take their course, thinking that the right ideas will win out by virtue of being true and therefore in accord with objective reality. Sooner or later, these people think, the contradictions entailed in false ideas – contradictions obvious to the free-market advocates – will become obvious to everybody. Moreover, false ideas will result in bad consequences against which people will rebel, prompting them to apply true ideas instead. While this view is tempting – and I wish it reflected reality – I am afraid that it misrepresents the course that policies and intellectual trends take, as well as the motivations of most human beings.
Why does the truth not always – indeed, virtually never, up until the very recent past – win out in human societies among the majority of people? Indeed, why can one confidently say that most people are wrong about most intellectual matters and matters of policy most of the time? A few reasons will be explored here.
First, the vast majority of people are short-sighted and unaware of secondary effects of their actions. For instance, they see the direct effects of government redistribution of wealth – especially if they are on the receiving end – as positive. They get nice stuff, after all. But the indirect secondary effects – the reduced incentives of the expropriated to produce additional wealth – are not nearly so evident. They require active contemplation, which most people are too busy to engage in at that sophisticated a level.
The second reason why truth rarely wins in human societies – at least in the short-to-intermediate term – is that people’s lifespans are (thus far in our history) finite. While many people do learn from their experiences and from abstract theory and recognize more of the truth as they get older, those people also tend to die at alarming rates and be replaced by newer generations that more often than not make the same mistakes and commit the same fallacies. The prevalence of age-old superstitions – including beliefs in ghosts, faith healing, and socialism – can be explained by the fact that the same tempting fallacies tend to afflict most unprepared minds, and it takes a great deal of time and intellectual training for most people to extricate themselves from them – unless they happened to have particularly enlightened and devoted parents. If all people lived forever, one could expect them to learn from their mistakes and fallacies eventually and for the prevalence of those errors to asymptotically approach zero over time.
The third reason for the difficulty true ideas have in winning is the information problem. No one person has access to all or even a remote fraction of the truth, and certainly no one person can claim to be in possession of all the true ideas required to prevent or even optimally minimize all human folly, aggression, and self-destruction. Moreover, just because a true idea exists somewhere and someone knows it does not mean that many people will be actively seeking it out. Improving information dispersal through such technologies as the Internet certainly helps inform many more people than would have been informed otherwise, but this still requires a fundamental willingness to seek out truth on the part of people. Some have this willingness; others could not care less.
The fourth reason why the truth rarely wins out is that the proponents of false ideas are often persistent, clever, and well organized. They promote their ideas – which they may well believe to be the truth – just as assiduously, if not more so, than the proponents of truth promote their ideas. In fact, how true an idea is might matter when it comes to the long-term viability of the culture and society whose participants adopt it; but it matters little with regard to how persuasive people find the idea. After all, if truth were all that persuaded people, then bizarre beer ads that imply that by drinking beer one will have fancy cars and lots of beautiful women would not persuade anyone. The persistence of advertising that focuses on anything but the actual merits and qualities of the goods and services advertised shows that truth and persuasiveness are two entirely different qualities.
The fifth reason why the truth has a difficult time winning over public opinion is rather unfortunate and may be remedied in time. But many people are, to be polite, intellectually not prepared to understand it. Free-market economics and politics are not easy subjects for everybody to grasp. If a significant fraction of the population in economically advanced countries has trouble remembering basic historical facts or doing basic algebra, how hard must economic and political theory be for such people! I do not believe that any person is incapable of learning these ideas, or any ideas at all. But to teach them takes time that they personally are often unwilling to devote to the task. As economic and technological growth renders more leisure time available to more people, this might change, but for the time being the un-intellectual state of the majority of people is a tremendous obstacle to the spread of true ideas.
It is bad enough that many people are un-intellectual and thus unable to grasp true ideas without a great deal of effort they do not wish to expend. That problem can be remedied with enough material and cultural progress. The greater problem, and the sixth reason why the truth has difficulty taking hold, is that a sizable fraction of the population is also anti-intellectual. They not only fail to think and learn – whether because they cannot or because they choose not to – but actively despise those who do. Anti-intellectualism is a product of pure envy and malice, much like bullying in the public schools. It led to the genocides of Nazi Germany, the Soviet Union under Stalin, Communist China under Mao, and Communist Cambodia under the Khmer Rouge. In Western schools today, it leads to many of the best and brightest students – who know more of the truth than virtually anyone else – being relentlessly teased, mocked, suppressed, ostracized, and even physically attacked by their jealous and lazy peers as well as by some egalitarian-minded teachers.
But enough about why most people are unreceptive to true ideas. Even those who are receptive have substantial problems that need to be overcome – and most often are not overcome – in order for the truth to win. The seventh reason why the truth rarely wins is that most of the people who do understand it are content to merely contemplate it instead of actively promoting it. They might think that they are powerless to affect the actual course of affairs, and their sole recourse is simply the satisfaction of knowing that they are right while the world keeps senselessly punishing itself – or the satisfaction that at least they are not an active or enthusiastic part of “the system” that leads to bad outcomes. This, I regret to say, is not enough. Knowing that one is right without doing anything about it leads to the field of ideas and actions being wholly open to and dominated by the people who are wrong and whose ideas have dangerous consequences.
Everyone who knows even a shred of the truth wants to be a theorist and expound grand systems about what is or is not right. I know that I certainly do. I also know that theoretical work and continual refinement of theories are essential to any thriving movement for cultural and intellectual change. But while theory is necessary, it is not sufficient. Someone needs to do the often monotonous, often frustrating, often exhausting grunt work of implementing the theories in whatever manner his or her abilities and societal position allow. The free-market movement needs government officials who are willing to engage in pro-liberty reforms. But it also needs ordinary citizens who are willing to write, speak, and attempt to reach out to other people in innovative ways that might just be effective at persuading someone. To promote the truth effectively, a tremendously high premium needs to be put on the people who actually apply the true ideas, as opposed to those who simply contemplate them.
Read other articles in The Rational Argumentator’s Issue CLXXXVI.
I have a pretty positive view of human nature, for a number of reasons. Partly, I’m consciously correcting for the negative bias of “if it bleeds, it leads” journalism. I also reject the idea that being self-interested is necessarily anti-social. Pursuing your own happiness, rightly understood, is a good thing. In fact, I would argue that those who care little for loftier goals like the good of society often do more good than do-gooders, as long as they pursue their self-interest rationally.
But caring about others is also a good thing, of course. And nobody cares more than those activists, pundits, political leaders, and enthusiastic voters who are involved in fighting to bring about a better society, right? Well, not according to philosopher Michael Huemer. In a very readable and thought-provoking paper entitled “In Praise of Passivity,” Huemer suggests that most people who see themselves as motivated by some high political ideal are instead motivated “by a desire to perceive themselves as working for the noble ideal.”
How can you tell who really cares?
Huemer writes, “If people are seeking high ideals such as justice or the good of society, then they will work hard at figuring out what in fact promotes those ideals and will seek out information to correct any errors in their assumptions about what promotes their ideals, since mistaken beliefs on this score could lead to all of their efforts being wasted.”
This requires, among other things, reading up on more than one side of a controversial issue. Huemer doesn’t think most people with strong political opinions do these kinds of things. Rather, according to his observations, “most people who expend a great deal of effort promoting political causes expend very little effort attempting to make sure their beliefs are correct. They tend to hold very strong beliefs that they are very reluctant to reconsider.” This tendency certainly counts against my positive view of human nature.
Given how difficult it is to acquire real knowledge in the social sciences—something Huemer explores in his paper—people who merely want to perceive themselves as working for high political ideals are very likely to do more harm than good. But all is not lost. For one thing, I ascribe no ill will to people who want to feel good about themselves. And fortunately, there are workarounds for our all-too-human cognitive biases. Huemer has several recommendations for how to do some real good (and avoid doing real harm) in the world, recommendations that will surely challenge many people’s assumptions — which is itself a good thing.