The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.
U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.
Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.
Materialism has become a rather dirty word, principally through its connection to consumerism. Indeed, materialism seems to have become so thoroughly conflated with consumerism as to be wholly indistinguishable from it. For example, in the study, Changes in Materialism, Changes in Psychological Well-Being: Evidence from Three Longitudinal Studies and an Intervention Experiment, the authors write: “Studies 1, 2, and 3 examined how changes in materialistic aspirations related to changes in well-being, using varying time frames (12 years, 2 years, and 6 months), samples (US young adults and Icelandic adults), and measures of materialism and well-being.”
It would be mistaken to conflate a philosophy of materialism with mere consumerism as a behavioral practice. I am not suggesting that this is what the authors of the study have done (indeed, it appears that they are simply using ‘materialism’ as a placeholder for ‘material objects; principally, those objects manufactured and distributed in modern western society’); however, at first glance it is difficult to tell, and this is the crux of the problem. When one word is conflated with another for a sufficient period of usage, the two become implicitly associated, regardless of whether they are actually interlaced in any meaningful way. Thus, when one deploys the term ‘consumerism’, one instantly thinks of ‘materialism’, and vice versa. This, I shall argue, is wholly mistaken; however, before proceeding, let us define our terms.
Consumerism is a term which rose to prominence in the 20th Century with the advent of mass production and denotes a social order wherein goods are purchased and used (‘consumed’) in ever-increasing quantities. It has a few other, more technical definitions; however, this is generally the explicit meaning of the term when it is negatively deployed (and it is almost always negatively deployed, at least as of this writing, though positive variations of the term have been used, such as by J. S. Bugas, who deployed the word to refer to consumer sovereignty). In this negative characterization, consumerism is keeping-up-with-the-Joneses or Patrick Batemanism — normative behaviors which privilege non-noetic objects over noetic ones, with the exception of the referent consumer (the individual who is consuming the non-noetic objects, who naturally does so, not because they care solely about the objects themselves, but because they gain something from the consumption of those objects).
Materialism, broadly, briskly and vulgarly speaking, is a philosophical position generally characterized by substance monism, which holds that because everything which has been observed is energy and matter, it is rational to conclude everything that exists is (or is likely to be) composed of energy and matter (the same inductive reasoning is at work in expanding the theory of gravity to all places in the universe, even those wholly unobserved). As a school of thought, it has gone through numerous incarnations ranging from Democritus the atomist, to the cosmic mechanists prior to Newton, to the scientistic physicalists of the modern age (such as Hawking, Krauss and Dawkins).
More rigorous, sophisticated and logically defensible forms of ontological naturalism (sometimes referred to as ‘realism’ in contradistinction to ‘idealism’) which have been referred to as various materialisms can be found in the work of such philosophers as Wilfrid Sellars, John McDowell and Jeremy Randel Koons, and the neuroscientist, Paul M. Churchland.
Regardless of whether or not one agrees with the ontological assertions or arguments of any variation of materialism – atomist, mechanist, Sellarsian or eliminativist – it should be clearly noted that consumerism is a descriptive set of social practices, not a holistic formal ontology. One may be a Buddhist, Christian, Muslim or Daoist and still be a consumerist. Indeed, the vast majority of those who have ever lived western consumerist lifestyles within modern society have been Christians (principally Catholics and Protestants), not scientistic materialists (as is sometimes alleged); this is demonstrable simply by reference to religio-demographic composition, as most consumer societies were, from their inception, constituted by Christians who are, obviously, non-materialists (philosophically speaking). Of course, it is perfectly possible to be a stalwart materialist (in the philosophical sense) and still be a consumerist, but it is not intrinsic to the position.
Drawing a clear distinction between materialism and consumerism is important: because consumerism has become so thoroughly disdained, reference to it likewise besmirches any materialist ontology through negative moral assignation rather than through rigorous logical refutation, thus engendering an impairment not only of thorough-going materialist diagrams, but also of critical, logical thought itself.
Kaiter Enless is the administrator and principal author of the Logos website and literary organization.
Review of Ray Kurzweil’s “How to Create a Mind” – Article by G. Stolyarov II
How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading toward the not-too-distant future (the late 2020s, according to Kurzweil) during which computers would be able to emulate human minds.
Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in their details, they can arise on the basis of carrying out simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.
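The hierarchical idea can be made concrete with a minimal sketch. The following Python toy (my own illustration, not Kurzweil's actual pattern-recognition algorithm, which involves probabilistic hierarchical hidden Markov models) shows how a higher-level recognizer can be composed entirely out of simpler, lower-level recognizers:

```python
# A toy illustration of hierarchical pattern recognition: low-level
# recognizers each detect a tiny pattern, and a higher-level recognizer
# fires only when all of its constituent low-level recognizers fire.
# This mimics the layered composition described in the text, in a
# deliberately simplified, deterministic form.

def make_recognizer(pattern):
    """Low-level recognizer: fires if `pattern` occurs in the input text."""
    def recognize(text):
        return pattern in text
    return recognize

def make_word_recognizer(word):
    """Higher-level recognizer assembled from bigram recognizers of `word`."""
    parts = [make_recognizer(word[i:i + 2]) for i in range(len(word) - 1)]
    def recognize(text):
        # Fires only when every constituent bigram recognizer fires.
        return all(part(text) for part in parts)
    return recognize

apple = make_word_recognizer("apple")
print(apple("i ate an apple today"))  # True
print(apple("i ate a pear today"))    # False
```

The point of the sketch is purely structural: each level operates on the outputs of the level below it using simple rules, yet the stack as a whole recognizes something none of its parts could alone.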
With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). 
I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of today.
The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (that showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. 
Especially if the task is to engineer a mind that has at least the faculties of the human brain, then Kurzweil is wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which it became fashionable in some circles to disparage only predominantly in the 20th century. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to be able to rationally conclude that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.
One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:
Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of being a feeling, conscious person. If you are conscious, then so too is You 2.
So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.
So we don’t need your old body and brain anymore, right? Okay if we dispose of it?
You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.
Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you. (How to Create a Mind, pp. 243-244)
Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chooses to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.
Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:
But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)
Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, but with a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist. Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I disagree. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of the nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result may be potentially indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation.
It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.
How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes of arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans maintain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is due to a deliberate push upward by the efforts of the minds of the creators of progress, using the machines they have built.
Everyone intuits an emotional substance to music, yet few can explain its nature and origins. According to some, it is merely subjective: a piece evokes feelings that are personal to the listener but have no basis in the actual structure, melody, and harmonies of the composition itself. According to others, emotion in music can only be explained if anchored to a particular story or the historical context of the composer’s life and motivations. Still others disdain talk of musical emotion altogether and prefer a pure formalism, sometimes seeking to explain why music that feels jarring, discordant, or evocative of nothing in particular can still be great because of some convention-flouting thing it does. M. Zachary Johnson, a teacher and historian of music and himself an accomplished composer, differs from all of those commonplace views and, in Emotion in Life and Music: A New Science, sets forth a framework by which the mathematics inherent in musical relationships and the feelings to which music gives rise are not only reconciled but shown to be inextricably linked, providing “connection of the emotion with the exact mathematical ratios which measure pitch distance and explain our qualitative affective experience” (p. 163).
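The mathematics of pitch distance that Johnson invokes can be illustrated numerically. The snippet below (my own illustration, not drawn from Johnson's book) computes the sizes of some just-intonation intervals in cents, showing how consonant intervals correspond to simple frequency ratios while a characteristically dissonant interval has a more complex one:

```python
# Intervals between two pitches are measured by the ratio of their
# frequencies; the logarithmic "cents" scale (1200 cents = one octave)
# converts those ratios into additive pitch distances.
import math

def cents(ratio):
    """Size of an interval, in cents, given its frequency ratio."""
    return 1200 * math.log2(ratio)

intervals = {
    "octave (2:1)":          2 / 1,
    "perfect fifth (3:2)":   3 / 2,
    "perfect fourth (4:3)":  4 / 3,
    "major third (5:4)":     5 / 4,
    "minor second (16:15)":  16 / 15,  # a characteristically dissonant interval
}

for name, ratio in intervals.items():
    print(f"{name:22s} {cents(ratio):7.1f} cents")
```

Running this shows the octave at exactly 1200 cents and the perfect fifth at roughly 702 cents; the simpler the ratio, the more consonant the interval tends to sound, which is the kind of quantitative regularity underlying the qualitative claims discussed above.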
Johnson’s concept of the psychological signature of a piece based on three measurable dimensions of intensity, speed, and affect provides a rubric for discerning which basic emotions a musical passage will elicit in the listener. As Johnson points out, these are generalized emotions such as pride or anguish, not anchored to a particular context (e.g., the feeling of accomplishment at having run a marathon or the feeling of having been betrayed) – although other media, such as the storyline of an opera, and even the listener’s personal experiences can provide such a context, which is indeed why different listeners may have different subjective associations with a piece of a particular, objective psychological signature. Even though musical tastes do vary widely among individuals, Johnson convincingly articulates that these tastes are still in reference to something in particular and that an individual’s response to the objective psychological signature of a piece tells more about the listener than about the piece itself. This is a welcome, refreshing contrast to the often militantly intolerant subjectivism of those who proclaim that there are no distinctions of quality or even nature to music or even art in general – that it is all up to the arbitrary preferences of the composer and/or listener, and that anyone who dares challenge this dogma deserves condemnation in the most strident terms. Perhaps contemporary Western culture, or at least the occasional oasis of rationality within it, is beginning to turn away from such absurdity, and Johnson contributes theoretical support to the view most articulately (in our era) espoused by Alma Deutscher that music should be beautiful.
Why is it desirable for dissonance to be resolved? Johnson explains that “Feelings such as pleasure, joy, serenity, inner harmony and balance – these are settled, complete states of mind. They are self-sufficient rewards, forms of satisfaction and contentment. They are ends in themselves. Feelings such as pain, suffering, fear, anger, restlessness, emotional distress and chaos – these are unsettled, incomplete, resolution-demanding states of mind. They motivate us to take some form of productive or corrective action. In respect to psychology, these are a propulsion to satisfy a need, to resolve a clash, to soothe oneself and heal, to strengthen, to gain adaptive flexibility, to stabilize the psyche and bring order to it” (p. 65). Wholesome, constructive music does not merely exist for its own sake but can greatly assist individuals in this task of achieving emotional integrity and strength. This includes music which expresses the darker or incomplete emotions, as long as this expression offers the listener an effective laboratory of the mind to work through such emotions without the risks and harms that would give rise to them in one’s personal life. Johnson notes that “Music rewards you for successful cognitive action, not for successful existential action. And when it gives you darker emotions, the function is not to indicate loss and failure, but to provide a means of sensually enjoying and studying and contemplating the states of consciousness, independently of the issue of actual material loss or gain – which is a form of self-knowledge, an affirmation of the value of one’s own faculties, and therefore itself a spiritual gain” (p. 110). However, there is a difference between a healthy, structured, rational exploration of the darker emotions with the intent of achieving resolution and completeness and the self-destructive embrace of those emotions, which certain types of “music” attempt to inculcate.
I consider myself to be within the same broad Apollonian musical and esthetic tradition as Johnson – as contrasted with the Dionysian revelry in the shocking, debased, and unrestrained. Yet perhaps my most significant difference with Johnson is the scope of what I would encompass within the Apollonian milieu and the latitude which I would allow to certain composers whom Johnson portrays rather harshly. Yes, Richard Wagner had his long, moody, meandering passages – but when his music becomes focused, determined, and structured, it is truly majestic. Yes, Dmitri Shostakovich was often despondent, but he could also write a fugue without any dissonance – and, besides, who would not be despondent when responding to the atrocities of the Stalin regime, but needing to do so in a veiled, indirect manner to create plausible deniability? (Shostakovich, too, had his heroic moments, as in the ending to his Seventh Symphony, which is about as optimistic as one can reasonably be in the midst of the devastation of World War II.) Nor would I agree with Johnson’s portrayal of the Second Movement of Wolfgang Mozart’s Piano Concerto No. 22 in E-Flat as conveying a message of hopelessness or futility; I would rather characterize it as expressing mild, reflective melancholy. As for Arnold Schoenberg – well, Schoenberg deserves all of the criticism that Johnson has in store for him; he had no excuses for the misguided rebellion against tonality.
Yet, more generally, it is perhaps a misdirection of effort to focus on criticism of singular figures in musical or intellectual history. The massive departure of “high” music from tonality in the early 20th century certainly could not have been solely Schoenberg’s doing – nor could the intellectual seeds for this trend have been planted a century and a half in advance by Immanuel Kant (whom Johnson characterizes, following many similar assertions by Ayn Rand, as the mastermind of the end of the Enlightenment and the decline of the West). Kant had his errors, to be sure (though Rand always somehow overlooked the redeeming aspects of his immense humanism and political classical liberalism, especially in the context of his time), and Schoenberg’s music is simply not pleasant to the ear – but one could have a civilized and interesting conversation with either Kant or Schoenberg over a cup of coffee. No – the rebellion against the Enlightenment was more the doing of the rabble who cheered when the guillotine fell during the Reign of Terror. The widespread descent of music into atonality could not have occurred were it not for the slaughter of World War I, a crime of millions against millions – and against themselves. Johnson’s criticism of rock music (perhaps itself a bit harsh – but I offer my evaluation as one who has only heard the music separately from its typical “scene”) is better leveled at the ordinary revelers at Woodstock and Altamont – not the music itself (which is rather harmonious and innocuous compared to what commonly passes for popular “music” today). The tendency toward dissipation and destruction is not orchestrated by a handful of avatars of particular movements – but, rather, it lurks within the masses of people because of regrettable cognitive biases and irrational emotional urges that are the unfortunate inheritance of humankind’s deeply flawed evolutionary origins. 
In certain eras these destructive inclinations are subdued due to general prosperity and the proper incentives within social, political, and technological systems – whereas in other eras, arguably including our own (though not always or everywhere), they are encouraged by widespread norms of (mis)conduct, cultural portrayals, and everyday attitudes, and are acted out by masses of people at great personal and societal cost. This is, in many regards, an ancient and recurring problem, sometimes taking on bizarre manifestations such as the pathological dance epidemic of 1518.
Accordingly, it is more important to advocate the Apollonian mindset in general in opposition to the Dionysian proclivities in general than to seek to single out particular instances of the latter. As long as humans continue to contend with our flawed evolutionary inheritance – which may not and should not always be our lot – and as long as some humans also retain aspects of nobility of character and aspiration for a better life, there will always be some exemplars of both the Apollonian and the Dionysian to point to. A more salient question, though, is, “Which of these paradigms is proportionally predominant?” Furthermore, how can the proportions among cultural creations be shifted in favor of the Apollonian? The more immediate problem we contend with is that there are vast quantities of people who would understand nothing in Johnson’s book and would have no knowledge of anything he praises or criticizes; they would be equally ignorant of Mozart and Beethoven, Aristotle and Kant, Schoenberg and Shostakovich, Brahms and Ayn Rand, and yet they would hate everything about any mention of them (in whatever light) – or about my review of Johnson’s book, or about a review from a critic with views diametrically opposite mine. The problem of anti-intellectualism in contemporary Western societies (particularly the United States) runs that deep, and it is evident that Johnson is gravely concerned about this predicament.
But perhaps good music can offer us a path toward a brighter future. If anti-intellectualism is the predominant cultural malaise of our time, then the inoculation against it may be found in Johnson’s articulation of the purpose of the best music as expressing the love of intelligence: “The essence of our humanity, the linchpin integrating reason and emotion, the special theme of the good life, the hallmark of virtue, the root of justice, the core of idealism and aspiration and heroism, the fundamental guardian of political freedom, and the root of all human love, is the love of man’s intelligence. […] The essence of music is precisely the love of human intelligence. Music, as nature’s reward for cognitive fitness, is the greatest medium in existence for expressing that theme” (pp. 179-180). Could exposure to great music – simple exposure, without even the theoretical explication which is accessible only at a much higher level of erudition – instill a love of intelligence in sufficiently larger numbers of people so as to turn the cultural tide? This is at least worth including as a tactic in the great, ongoing endeavor of civilizing the human mind and ensuring that the nobility of sentiment can grow to keep pace with material and technological advances.
From the beginning, Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information, contends in his new paper “A Rule of Persons, Not Machines: The Limits of Legal Automation” that software, given its brittleness, is not designed to deal with the complexities of taking a case through court and establishing a verdict. As he understands it, an AI cannot deviate far from the rules laid down by its creator. This assumption, which is not quite right even at present, only slightly tinges an otherwise erudite, sincere, and balanced treatment of the topic. Nor does he show much faith in the use of past cases to create datasets for the next generation of paralegals, automated legal services, and, in the more distant future, lawyers and jurists.
Lawrence Zelenak has noted that when taxes were filed entirely on paper, provisions were limited so as to avoid imposing unreasonably irksome nuances on the average person. Tax-return software has eliminated this “complexity constraint.” Without it, he goes on, the laws, and the software that interprets them, become akin to a “black box” for those who must abide by them. William Gale has said taxes could be easily computed for “non-itemizers”: the government could use information it already has to present a “bill” to this class of taxpayers, saving time and money for all parties involved. However, simplification does not always align with everyone’s interests. The maker of TurboTax, whose business is built entirely on helping ordinary people navigate the labyrinth that is the American federal income tax, saw such proposals as a threat to its business model and put together a grassroots campaign to fight them. More than just another example of a business protecting its interests, this is an ominous foreshadowing of an escalation scenario likely to play out in many areas if and when legal AI becomes sufficiently advanced.
Pasquale writes: “Technologists cannot assume that computational solutions to one problem will not affect the scope and nature of that problem. Instead, as technology enters fields, problems change, as various parties seek to either entrench or disrupt aspects of the present situation for their own advantage.”
What he is referring to here, in everything but name, is an arms race. The vastly superior computational powers of robot lawyers may make the already perverse incentive to write ever more Byzantine rules even more attractive to bureaucracies and lawyers. The concern is that the clauses and dependencies hidden within contracts will quickly multiply, making them far too detailed even for professionals to make sense of in a reasonable amount of time. Given that this sort of software may become a necessary accoutrement in most or all legal matters, demand for it, or for professionals with access to it, will expand greatly at the expense of those who are unwilling or unable to adopt it. This, though Pasquale only hints at it, may lead to greater imbalances in socioeconomic power. On the other hand, he does not consider the possibility of bottom-up open-source (or state-led) efforts to create synthetic public defenders. While this may seem idealistic, it is fairly clear that the open-source model can compete with, and in some areas outperform, proprietary competitors.
It is not unlikely that, within subdomains of law, an array of arms races will arise between synthetic intelligences. If a robot lawyer knows its client is guilty, should it squeal? This will change the way jurisprudence works in many countries, but it would seem unwise to program any robot to knowingly lie about whether a crime, particularly a serious one, has been committed – including by omission. If it is fighting against a punishment it deems overly harsh for a given offense – say, trespassing to get a closer look at a rabid raccoon, or unintentional jaywalking – should it maintain its client’s innocence as a means to an end? A moral consequentialist, seeing that no harm was done (or, in some instances, could possibly have been done), may persist in pleading innocent. A synthetic lawyer may be more pragmatic than deontological, but it is not entirely correct, and certainly shortsighted, to (mis)characterize AI as only capable of blindly following a set of instructions, like a Fortran program made to compute the nth member of the Fibonacci series.
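To make the contrast concrete, here is the kind of fixed-rule program the author invokes – sketched in Python rather than Fortran. Every step of its behavior is determined in advance by its programmer, which is precisely the limitation that learned models, trained on data rather than hand-written rules, are not bound by:

```python
def fibonacci(n: int) -> int:
    """Return the nth Fibonacci number, with F(0) = 0 and F(1) = 1.

    Every branch and operation here is fixed in advance; the program
    cannot deviate from these rules. This is the sense in which such
    software "blindly follows a set of instructions."
    """
    if n < 0:
        raise ValueError("n must be non-negative")
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # → 55
```

A machine-learning system, by contrast, derives its behavior from training data, so its outputs are not exhaustively enumerated by its creator in this way.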
Human courts are rife with biases: judges give more lenient sentences after taking a lunch break (roughly 65% more likely to grant parole – nothing to sneeze at), attractive defendants are viewed favorably by unwashed juries and trained jurists alike, and prejudices of all kinds against various “out” groups can tip the scales toward a guilty verdict or a harsher sentence. Why, then, would someone have an aversion to the introduction of AI into a system that is clearly ruled, in part, by the quirks of human psychology?
DoNotPay is an app that helps drivers fight parking tickets; it also allows drivers with legitimate medical emergencies to gain exemptions. So, as Pasquale says, not only will traffic management be automated, but so will appeals. However, as he cautions, a flesh-and-blood lawyer takes responsibility for bad advice. DoNotPay not only fails to take responsibility, but “holds its client responsible for when its proprietor is harmed by the interaction.” There is little reason to think machines would do a worse job of adhering to privacy guidelines than human beings do, unless, as in the example of a machine ratting on its client, some overriding principle compels them to divulge information in order to protect others from harm – say, when a diagnosis makes the client a danger in his personal or professional life. Is the client responsible for the mistakes of the robot it has hired? Or should the blame fall upon the firm that provided the service?
Making a blockchain that could handle the demands of processing purchases and sales – one that takes into account all the relevant variables needed to make expert judgments on a matter – is no small task. As the infamous disagreement over the meaning of the word “chicken” in Frigaliment Importing Co. v. B.N.S. International Sales Corp. illustrates, defining even the most ordinary of terms can be puzzling. The need to maintain a decent reputation in order to maintain sales is a strong incentive against knowingly cheating customers, and although cheating tends to be the exception for this reason, it is still necessary to protect against it. As one official at the Commodity Futures Trading Commission put it, “where a smart contract’s conditions depend upon real-world data (e.g., the price of a commodity future at a given time), agreed-upon outside systems, called oracles, can be developed to monitor and verify prices, performance, or other real-world events.”
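The official’s description can be illustrated with a toy sketch (Python, with invented names; real smart-contract platforms and oracle services differ considerably in detail). The settlement logic is fixed code agreed to by both parties, while the price it depends on is supplied by an external, agreed-upon source – the oracle:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CommodityFuture:
    """Toy futures contract: buyer receives (spot - strike) * quantity at expiry."""
    strike: float   # agreed-upon price per unit
    quantity: int   # units of the commodity

def settle(contract: CommodityFuture, price_oracle: Callable[[], float]) -> float:
    # The oracle stands in for the "agreed-upon outside system" that
    # monitors and verifies the real-world price the contract depends on.
    spot = price_oracle()
    return (spot - contract.strike) * contract.quantity

# A stubbed oracle returning a verified spot price of 103.5:
payout = settle(CommodityFuture(strike=100.0, quantity=10), lambda: 103.5)
print(payout)  # (103.5 - 100.0) * 10 = 35.0
```

The hard part, of course, is not the arithmetic but ensuring that the oracle’s report is trustworthy – which is exactly where disputes like the one over “chicken” re-enter the picture.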
Pasquale cites the SEC’s decision to force providers of asset-backed securities to file “downloadable source code in Python.” AmeriCredit responded by saying it “should not be forced to predict and therefore program every possible slight iteration of all waterfall payments” because its business is “automobile loans, not software development.” AmeriCredit does not seem to be familiar with machine learning. There is a case for making all financial transactions and agreements explicit on an immutable platform like blockchain. There is also a case for making all such code open source, ready to be scrutinized by those with the talents to do so or, in the near future, by those with access to software that can quickly turn it into plain English, Spanish, Mandarin, Bantu, Etruscan, etc.
During the fallout of the 2008 crisis, some homeowners noticed that the entities on their foreclosure paperwork did not match the paperwork they had received when their mortgages were sold to a trust. According to Dayen (2010), many banks did not fill out the paperwork at all. This seems to be a rather forceful argument in favor of incorporating synthetic agents into law practices. Like many futurists, Pasquale foresees an increase in “complementary automation.” Chess engines cooperating with humans can still trounce the best standalone AI – a commonly cited example of how two (very different) heads are better than one. Yet going to a lawyer is not like visiting a tailor. People, including fairly delusional ones, know whether their clothes fit; they do not know whether they have received expert counsel – although the outcome of the case might give them a hint.
Pasquale concludes his paper by asserting that “the rule of law entails a system of social relationships and legitimate governance, not simply the transfer and evaluation of information about behavior.” This is closely related to the doubts expressed at the beginning of the piece about the usefulness of datasets in training legal AI. He then states that those in the legal profession must handle “intractable conflicts of values that repeatedly require thoughtful discretion and negotiation.” This appears to be the legal equivalent of epistemological mysterianism, and it stands on still shakier ground than its analogue, because it is clear that laws are, or should be, rooted in some set of criteria agreed upon by the members of a given jurisdiction. Shouldn’t the rulings of lawmakers, and the values that inform them, be at least partially quantifiable? There are efforts, like EthicsNet, that are trying to prepare datasets and criteria to feed machines in the future (because they will certainly have to be fed by someone!). There is no doubt that the human touch in law will not be supplanted soon; the question is whether our intuition should be exalted as a guarantee of fairness or recognized as a hindrance to moving beyond a legal system bogged down by the baggage of human foibles.
Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.
Gennady Stolyarov II Interviewed by Nikola Danaylov of Singularity.FM
On March 31, 2018, Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, was interviewed by Nikola Danaylov, a.k.a. Socrates, of Singularity.FM. A synopsis, audio download, and embedded video of the interview can be found on Singularity.FM here. You can also watch the YouTube video recording of the interview here.
Apparently this interview, nearly three hours in length, broke the record for the longest of Nikola Danaylov’s in-depth conversations on philosophy, politics, and the future. The interview covered both some of Mr. Stolyarov’s personal work and ideas, such as the illustrated children’s book Death is Wrong, and the efforts and aspirations of the U.S. Transhumanist Party. The conversation also delved into such subjects as the definition of transhumanism, intelligence and morality, the technological Singularity or Singularities, health and fitness, and even cats. Everyone will find something of interest in this wide-ranging discussion.
Visit the U.S. Transhumanist Party website at http://transhumanist-party.org. To help advance the goals of the U.S. Transhumanist Party, as described in Mr. Stolyarov’s comments during the interview, become a member for free, no matter where you reside. Click here to fill out a membership application.
How Marcus Aurelius Influenced Adam Smith (No, Really) – Article by Paul Meany
Adam Smith’s appreciation for the Stoic emperor’s writings is evident in his own work.
Who Was Marcus Aurelius?
Marcus Aurelius Antoninus Augustus was the last of the five good emperors of Rome. He was born in 121 AD, reluctantly became emperor in 161 AD, and reigned for 19 years until his death in 180 AD. His reign was punctuated by numerous wars during which he repelled Rome’s enemies in long campaigns. When not at the frontiers of the empire, he spent his time administering the law, focusing his attention particularly on the guardianship of orphans, the manumission of slaves, and choosing city councilors.
Lord Acton memorably stated, “Power tends to corrupt, and absolute power corrupts absolutely.” The aphorism is, for the most part, true, but history offers at least one exception: Marcus Aurelius. He famously had a keen interest in philosophy. Perpetually practicing self-control and moderation in all aspects of his life, he came as close as anyone ever has to embodying Plato’s ideal of the “philosopher king.”
While on the front lines of his campaign against the German tribes, Marcus Aurelius wrote his own personal diary. This was originally titled Ta Eis Heauton, meaning To Himself in Greek. Subsequent translations of the text changed the title numerous times; we now know it as Meditations. In Meditations, Marcus Aurelius writes his personal views on the Stoic philosophy.
He focuses heavily on the themes of finding one’s place in the cosmic balance of the universe, the importance of analyzing your actions, and being a good person. Asserting that one should be judged first and foremost on their actions, he decisively urged us to “waste no more time arguing about what a good man should be. Be one.” Meditations is a masterpiece of Stoic philosophy, brimming with insightful, emotional and, most importantly, useful observations on morality and the human condition.
Who Was Adam Smith?
Adam Smith was a Scottish moral philosopher who is renowned as one of the first modern economists. He was born in 1723 in Kirkcaldy and died in 1790. He is famous for his two seminal works, The Wealth of Nations and The Theory of Moral Sentiments. His work was massively influential on classical liberal thought as he was one of the first defenders of the free market.
In The Wealth of Nations and The Theory of Moral Sentiments, Smith articulated a persuasive case for the efficacy and morality of a free-market commercial society. Ludwig von Mises, speaking about Smith’s works, wrote that they “presented the essence of the ideology of freedom, individualism, and prosperity, with admirable clarity and in an impeccable literary form.” Classical liberal economist Milton Friedman often wore a tie bearing a portrait of Adam Smith to formal events.
Adam Smith’s Readings of Marcus Aurelius
These two figures lived in vastly different times, under vastly different circumstances, so how did Marcus Aurelius ever influence Adam Smith? The answer lies in the ancient philosophy of Stoicism.
Stoicism was one of the three major schools of Greek philosophy in the ancient world. It was founded in Athens in the 3rd century BC by a man named Zeno of Citium. The name “Stoic” was given to the followers of Zeno, who used to congregate to hear him teach at the Athenian Agora, under the colonnade known as the Stoa Poikile. Over time, Stoicism expanded and developed sophisticated views on metaphysics, epistemology, and ethics.
While Stoicism posits numerous views on a huge variety of topics, its most interesting and relevant observations are on ethics. The Stoics were concerned with perfecting self-control which allowed for virtuous behavior. They believed that, through self-control, one could be free of negative emotions and passions which blinded objective judgment.
With a peaceful mind, the Stoics thought, people could live according to the universal reason of the world and practice a virtuous life. Marcus Aurelius described the ideal Stoic life in book three of Meditations, writing, “peace of mind in the evident conformity of your actions to the laws of reason, and peace of mind under the visitations of a destiny you cannot control.”
Adam Smith was educated at the University of Glasgow where he studied under Francis Hutcheson. Hutcheson was a Scottish intellectual and a leading representative of the Christian Stoicism movement during the Scottish Enlightenment. He hosted private noontime classes on Stoicism which Adam Smith often attended. Smith’s preference for Marcus Aurelius was encouraged by Hutcheson, who published his own translation of Meditations.
In The Theory of Moral Sentiments, Smith referred to Marcus Aurelius as “the mild, the humane, the benevolent Antoninus,” demonstrating his deep admiration for the Stoic emperor. Marcus Aurelius influenced Adam Smith in three main areas: the idea of an inner conscience; the importance of self-control; and in his famous analogy of the “Invisible Hand.”
Our Inner Conscience
Both Marcus Aurelius and Adam Smith believed that the key to understanding morality was through self-scrutiny and sympathy for others.
Marcus Aurelius wrote Meditations in the form of a self-reflective dialogue with his inner self. He thought that moral conviction lay within “the very god that is seated in you, bringing your impulses under its control, scrutinizing your thoughts.” He interchangeably referred to this inner god as the soul or the helmsman and believed it to be a voice within you that attempts to sway you from immoral doings; we now call this a conscience.
Similarly, Smith emphasized the role of people’s innermost thoughts. A key aspect of Smith’s moral philosophy in The Theory of Moral Sentiments is the impartial spectator. Smith theorized that morality could be understood through the medium of sympathy. He thought that before people acted they ought to look for the approval of an impartial spectator.
“But though man has… been rendered the immediate judge of mankind, he has been rendered so only in the first instance; and an appeal lies from his sentence to a much higher tribunal, to the tribunal of their own consciences, to that of the supposed impartial and well-informed spectator, to that of the man within the breast, the great judge and arbiter of their conduct.”
The Importance of Self-Control
The Stoics listed four “cardinal virtues” — wisdom, justice, courage, and temperance — for which they held great reverence. These were believed to be expressions and manifestations of a single indivisible virtue. Smith used slightly different names, but he endorsed the same set of virtues and the idea that they were all facets of one indivisible virtue.
Smith and Aurelius had a mutual appreciation for the virtue of self-control. They both believed in an impartial, self-scrutinizing conscience that guided morality: while Aurelius called it the God Within, Smith called it the Impartial Spectator.
Marcus Aurelius said, “You have power over your mind — not outside events. Realize this, and you will find strength.” The primacy of self-control is intrinsic to Stoic philosophy. In a similar vein, Smith writes that “self-command is not only itself a great virtue, but from it all the other virtues seem to derive their principal lustre.” This respect for self-control was encouraged and cultivated by Smith’s Impartial Spectator and Marcus Aurelius’ Inner God.
The Invisible Hand
Marcus Aurelius argued that we must work together in common cooperation in order to improve humanity as a whole; as he put it, we “were born to work together.” He stressed the vital nature of human cooperation.
“Constantly think of the universe as one living creature, embracing one being and soul; how all is absorbed into the one consciousness of this living creature; how it compasses all things with a single purpose, and how all things work together to cause all that comes to pass, and their wonderful web and texture.”
In The Wealth of Nations and The Theory of Moral Sentiments, Adam Smith’s defense of the free market is expressed through the analogy of the Invisible Hand. Smith argues that in a society of free exchange and free markets, people must sympathize with one another and understand how best to benefit their fellow man in order to better their own situation.
“It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages.”
The transaction will not occur unless the parties involved demonstrate their sympathy for the interests of others. In the analogy of the Invisible Hand, Smith argues that we must think of others before ourselves and consider how best to serve our fellow neighbor. This famous passage bears a striking resemblance to the previous passage by Marcus Aurelius who also argues for the importance of conscious cooperation among people for the common good.
We Are All Standing on the Shoulders of Giants
A Roman emperor seems like an unlikely intellectual influence for a classical liberal thinker such as Adam Smith. Upon closer inspection, however, Smith and Aurelius are like two peas in a pod: both men believed that the root of morality lies within the self-scrutiny of one’s conscience; both believe in the primacy of the virtue of self-control; and both believe in the importance of sympathy as a tool for cooperation and the betterment of civilized society.
No thinker is entirely alone in their pursuit of truth. All people discover truth by building upon the previous discoveries of others. This explains how an emperor came to influence so strongly an Enlightenment moral philosopher and economist more than a thousand years after he had passed away. I believe that the best expression of the development of such ideas was written by a medieval philosopher and bishop, John of Salisbury, who spoke of the wisdom of Bernard of Chartres:
“He pointed out that we see more and farther than our predecessors, not because we have keener vision or greater height, but because we are lifted up and borne aloft on their gigantic stature.”
We are all dwarfs standing on the shoulders of giants in the pursuit of the system of natural liberty and prosperity that Adam Smith sought during his lifetime.
Paul Meany is a student at Trinity College Dublin studying Ancient and Medieval History and Culture.
David Kelley is retiring from The Atlas Society, which he founded in 1990 under the name of The Institute for Objectivist Studies. But I can’t imagine David with an “emeritus” moniker retiring from the world of ideas that he has helped to shape.
I was at the founding event in New York City that February nearly three decades ago. I spoke at Summer Seminars and attended one-day New York events in the 1990s, and I had the privilege of working for many years with David at the Atlas Society. Knowing The Atlas Society and David as I do, I offer my own picks for his three greatest intellectual hits.
First, in The Contested Legacy of Ayn Rand, he explained that Objectivism is an open philosophy—indeed, that to be “open” is what separates a philosophy from a dogma. Objectivism originated with Ayn Rand and is defined by certain principles, but it has its own logic and implications that might even be at odds with some of Rand’s own thoughts. The philosophy is open to revision and new discoveries. One implication of David’s understanding—and of the virtue of independence—is that individuals must come to the truth through their own minds and their own paths. David, therefore, rejected the practice of too many Objectivists of labeling those who disagreed with some or much of the philosophy as “evil.” In many cases they are simply mistaken. He rejected the practice of refusing even to speak with individuals who called themselves “libertarians,” arguing that the only way to change someone’s mind is to address that mind. David saved Objectivism from becoming a marginalized cult.
Second, David advanced Objectivism by showing that “benevolence” is one of the cardinal virtues of the philosophy. He argued that the logic of the ideas that constitute the philosophy leads to the conclusion that it should take its place among other virtues like rationality, productivity, pride, integrity, honesty, independence, and justice. His book Unrugged Individualism: The Selfish Basis of Benevolence is an intellectual gem that has yet to be fully mined for the value it can offer to those who want to create a world as it can be and should be, a world in which humans can flourish.
Third, David identified three world views in conflict in today’s culture. The Enlightenment ushered in modernity, which values reason, with its products of science and technology; individuals, with their rights to pursue their own happiness; liberty, with governments limited to its protection; and dynamic free markets, with their opportunities for all to prosper. Opposing modernity, David sees premodernists, who emphasize the values of faith, tradition, social stability, and hierarchy. He also sees postmodernists, whom he describes as “vociferous foes of reason, attempting to undermine and expunge the very concepts of truth, objectivity, logic, and fact.” They see these and all values as “social constructs”—all except their own left-wing dogmas and their desire to use force to bend all to their soul-destroying whims. Those wanting to understand the values battle in our culture in order to win it for civilization must have David’s essay “The Party of Modernity” in their hands and its ideas in their minds.
David Kelley created The Atlas Society to further develop and promote Objectivism, the philosophy he loves. As he steps back from the day-to-day responsibilities of his position, I know he’ll devote more time to pursuing the ideas that give him so much joy and the rest of us so much enlightenment.
Dr. Edward Hudgins is the research director for The Heartland Institute. He can be contacted here.
In conjunction with other department directors, Hudgins sets the organization’s research agenda and priorities; works with in-house and outside scholars to produce policy studies, policy briefs, and books; contributes his own research; and works with Heartland staff to promote Heartland’s work.
Before joining Heartland, Hudgins was the director of advocacy and a senior scholar at The Atlas Society, which promotes the philosophy of reason, freedom, and individualism developed by Ayn Rand in works like Atlas Shrugged. His latest Atlas Society book was The Republican Party’s Civil War: Will Freedom Win?
While at The Atlas Society, Hudgins developed a “Human Achievement” project to promote the synergy between the values and optimism of entrepreneurial achievers working on exponential technologies and the values of friends of freedom.
Prior to this, Hudgins was the director of regulatory studies and editor of Regulation magazine at the Cato Institute. There, he produced two books on Postal Service privatization, a book titled Freedom to Trade: Refuting the New Protectionism, and a book titled Space: The Free-Market Frontier.
The Power of Making Friends with Ideological Enemies – Article by Sean Malone
Daryl Davis can be a model for how to change people’s minds.
“How can people hate me, when they don’t even know me?”
This is the question that drives the subject of a fantastic new documentary on Netflix called “Accidental Courtesy: Daryl Davis, Race, and America,” directed by Matt Ornstein.
For the past 30 years, soul musician Daryl Davis has been traveling the country in search of an answer in the most dangerous way possible for a black man in America: by directly engaging with members of the Ku Klux Klan.
He’s invited KKK members into his home, he’s had countless conversations, and, as unlikely as it seems, he now considers a number of them to be his friends.
Daryl might say that he’s not really even doing anything special besides treating his enemies with respect and kindness in the hopes of actually dissuading them from their hateful views.
Yet, that’s something almost no one else has the courage to do, even when the risks are considerably lower.
Disagreements are stressful and difficult, and the more horrifying someone else’s viewpoint is, the easier it is to dismiss the people who hold those beliefs as inhuman garbage who simply can’t be reasoned with. Social media has also made dehumanizing people considerably easier, as we all get to interact with people from around the world without ever seeing their faces or considering their feelings.
As a result, we live in an increasingly polarized time when a lot of people are saying that the only answer to hate and awful ideas is to meet them with even more hate, more anger, outrage, and even violence.
And it’s not just a problem when dealing with the worst ideas in human history like racial supremacy and fascism. Some people now take this approach for even trivial and academic disagreements.
Don’t like a speaker coming to campus? Silence them and prevent them from getting into the auditorium.
Don’t like what a Facebook friend has to say? Block them.
And of course, if you think someone you meet is a white supremacist or a neo-Nazi, the only thing left to do is punch them in the face.
Punching Doesn’t Work
But consider that most of human history is filled with people allowing their disagreements to turn into bloody, horrific warfare; it’s only our commitment to dealing with our adversaries peacefully through speech and conversation that has allowed us to become more civilized. So escalating conflicts into violence should be seen as the worst kind of social failure.
And besides, punching people who disagree with you doesn’t actually change their minds or anyone else’s, so we’re still left with the same deceptively difficult question before and after:
When people believe in wrongheaded or terrible things, how do we actually persuade them to stop believing the bad ideas, and get them to start believing in good ones instead?
Judging by social media, most people seem to believe that it’s possible to yell at people or insult and ridicule them until they change their minds. Unfortunately, as cathartic as it feels to let out your anger against awful people, this just isn’t an effective strategy for reducing the number of people who hold awful ideas.
In fact, if you do this, your opponents (and even more people who are somewhat sympathetic to their views, or just see themselves as part of the same social group) might actually walk away even more strongly committed to their bad ideas than they were before.
The evidence from psychology is pretty clear on this.
We know from studies conducted by neuroscientists like Joseph LeDoux that people’s amygdalas — the part of the brain that processes raw emotions — can actually bypass their rational minds and create a fight-or-flight response when they feel threatened or attacked. Psychologist Daniel Goleman called this an “Amygdala Hijack,” and it doesn’t just apply to physical threats.
People’s entire personal identity is often wrapped up in their political or philosophical beliefs, and a strong verbal attack against those beliefs actually creates a response in the brain of the target similar to a menacing lunge.
Even presenting facts or arguments that directly conflict with people’s core beliefs or identities can actually cause people to cling to those beliefs more tightly after they’ve been presented with contrary evidence. Political scientists like Brendan Nyhan and Jason Reifler have been studying this phenomenon for over 10 years and call it the “Backfire Effect.”
And when the people whose minds we desperately need to change are racists and fascists (or socialists and communists, for that matter), a strategy that actually backfires and pushes more people towards those beliefs is the last thing we need.
Principles of Persuasion
The good news is that in addition to knowing what doesn’t work, we also know a lot about how to talk to people in ways that are actually persuasive — and the existing research strongly supports Daryl Davis’s approach.
One of these principles, drawn from psychologist Robert Cialdini’s research on influence, is called “reciprocity,” and it’s based on the idea that people feel obliged to treat you the way you treat them. So, if you treat them with kindness and humility, most people will offer you the same courtesy. On the other hand, if you treat them with contempt, well…
Another principle Cialdini describes is the idea of “liking”.
It’s almost too obvious, but it turns out that if someone likes you personally and believes that you like them, it’s easier to convince them that your way of thinking is worth considering. One easy step towards being liked is to listen to others and find common ground through shared interests. This can be a bridge – or a shortcut – to getting other people to see you as a friend or part of their tribe.
You might think somebody like Daryl Davis would have nothing in common with a KKK member, but according to Daryl, “spend 5 minutes talking to someone and you’ll find something in common,” and “spend 10 minutes, and you’ll find something else in common.”
In the film, he connects with several people about music, and you can see these connections paying off — breaking down barriers and providing many Klan members with a rare (and in some cases only) opportunity to interact with a black man as a human being worth respecting instead of an enemy.
Even better, over time, forming these relationships has had an interesting side-effect.
In the last couple decades alone, over 200 of America’s most ardent white supremacists have left the Ku Klux Klan and hung up their robes and hoods for good.
Many of those robes now hang in Daryl’s closet.
And in a lot of cases, these individual conversions have much bigger consequences and end multi-generational cycles of bigotry. When a mother or a father leaves the darkness of the Klan, they’re also bringing their kids into the light with them. A few of these cases are profiled in “Accidental Courtesy”, and they’re indescribably moving.
Daryl Davis can be a model for how to change people’s minds, and with everything that’s going on in the world today, we need successful models now more than ever.
Making Friends From Enemies
There’s another point to all of this that I think often goes unsaid.
Unlike Daryl, most of us aren’t actually interacting with KKK members or trying to change people’s minds away from truly evil ideologies, and yet we all fall prey to the temptation of yelling, name-calling, and using all those techniques of influence that have the opposite of our intended effect.
It’s easy to allow outrage and emotion to carry us off into treating other people as inhuman enemies to be crushed rather than human beings to be persuaded.
But if Daryl’s techniques can work to convince die-hard white supremacists that a black man — and perhaps eventually all black people — are worthy of respect, imagine how effective they can be when disagreements crop up with your friends, neighbors, and co-workers who don’t actually hate you or the things you stand for.
Who knows, if you have more genuine conversations with people outside your bubble, you might even find yourself changing a little bit for the better as well.
“Accidental Courtesy” teaches us that the way to deal with wrong or evil ideas isn’t shouting them down or starting a fight; it’s having the courage to do what Daryl did and making a friend out of an enemy.
Sean Malone is the Director of Media at the Foundation for Economic Education (FEE). His films have been featured in the mainstream media and throughout the free-market educational community.
Academics like to say that we teach “critical thinking” without thinking too critically about what it means to think critically.
Being Critical, Not Thinking Critically
Too often in practice, people equate critical thinking with merely being skeptical of whatever they hear. Or they will interpret it to mean that, when confronted with someone who says something that they disagree with, they either:
a) stop listening (and perhaps then start shouting),
b) find a way to squeeze the statement into their pre-existing belief system (and if they can’t, they stop thinking about it), or
c) attempt to “educate” the speaker about why their statement or belief system is flawed. When this inevitably fails, they stop speaking to that person, at least about the subject in question.
Ultimately, each of these responses leaves us exactly where we started, and indeed stunts our intellectual growth. I confess that I do a, b, and c far too often (except I don’t really shout that much).
To me, critical thinking means, at a minimum, questioning a belief system (especially my own) by locating the premises underlying a statement or conclusion, whether we agree with it or not, and asking:
1) whether or not the thinker’s conclusions follow from those premises,
2) whether or not those premises are “reasonable,” or
3) whether or not what I consider reasonable is “reasonable” and so on.
This exercise ranges from hard to excruciatingly uncomfortable – at least when it comes to examining my own beliefs. (I’ve found that if I dislike a particular conclusion it’s hard to get myself to rigorously follow this procedure; but if I like a conclusion it’s often even harder.)
Teaching Critical Thinking
Fortunately, people have written articles and books that offer good criticisms of most of my current beliefs. Of course, it’s then up to me to read them, which I don’t do often enough. And so, unfortunately, I don’t think critically as much as I should…except when I teach economics.
It’s very important, for example, for a student to critically question her teacher, but that’s radically different from arguing merely to win. Critical thinking is argument for the sake of better understanding, and if you do it right, there are no losers, only winners.
Once in a while, a student speaks up in class and catches me in a contradiction – perhaps I’ve confused absolute advantage with comparative advantage – and that’s an excellent application of genuine critical thinking. As a result we’re both now thinking more clearly. But when a student or colleague begins a statement with something like “Well, you’re entitled to your opinion, but I believe…” that person may be trying to be critical (of me) but not in (or of) their thinking.
It may not be the best discipline for this, but I believe economics does a pretty good job of teaching critical thinking in the sense of #1 (logical thinking). Good teachers of economics will also strategically address #2 (evaluating assumptions), especially if they know something about the history of economic ideas.
Economics teachers with a philosophical bent will sometimes address #3 but only rarely (otherwise they’d be trading off too much economic content for epistemology). In any case, I don’t think it’s possible to “get to the bottom” of what is “reasonable reasonableness” and so on because what ultimately is reasonable may, for logical or practical reasons, always lie beyond our grasp.
I could be wrong about that or indeed any of this. But I do know that critical thinking is a pain in the neck. And that, I hope, is a step in the right direction.
Sanford (Sandy) Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism. He is a member of the FEE Faculty Network.