How Transhumanism Can Transcend Socialism, Libertarianism, and All Other Conventional Ideologies – Gennady Stolyarov II Presents at the VSIM:18 Conference

Gennady Stolyarov II


Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, discusses the key strengths and weaknesses of libertarianism, socialism, conservatism, and left-liberalism, the common failings of these and all other conventional ideologies, and why transhumanism offers a principled, integrated, dynamic approach for a new era of history, which can overcome all of these failings.

This presentation was delivered virtually by Mr. Stolyarov on September 13, 2018, to the Vanguard Scientific Instruments in Management 2018 (VSIM:18) conference in Ravda, Bulgaria. Afterward, a discussion ensued, in which Professor Angel Marchev, Sr., the conference organizer and the U.S. Transhumanist Party’s Ambassador to Bulgaria, offered his views on the dangers of socialism and the promise of transhumanism, followed by a brief question-and-answer period.

Visit the website of the U.S. Transhumanist Party here.

Download and view the slides of Mr. Stolyarov’s presentation (with hyperlinks) here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form here.

Become a Foreign Ambassador for the U.S. Transhumanist Party. Apply here.

The Rational Argumentator’s Sixteenth Anniversary Manifesto

The New Renaissance Hat
G. Stolyarov II
September 2, 2018
******************************

On August 31, 2018, The Rational Argumentator completed its sixteenth year of publication. TRA is older than Facebook, YouTube, Twitter, and Reddit; it has outlasted Yahoo! Geocities, Associated Content, Helium, and most smaller online publications in philosophy, politics, and current events. Furthermore, the age of TRA now exceeds half of my lifetime to date. During this time, while the Internet and the external world shifted dramatically many times over, The Rational Argumentator strove to remain a bulwark of consistency – accepting growth in terms of improvement of infrastructure and accumulation of content, but not the tumultuous sweeping away of the old to ostensibly make room for the new. We do not look favorably upon tumultuous upheaval; the future may look radically different from the past and present, but ideally it should be built in continuity with both, and with preservation of any beneficial aspects that can possibly be preserved.

The Rational Argumentator has experienced unprecedented visitation during its sixteenth year, receiving 1,501,473 total page views as compared to 1,087,149 total page views during its fifteenth year and 1,430,226 during its twelfth year, which had the highest visitation totals until now. Cumulative lifetime TRA visitation has reached 12,481,258 views. Even as TRA’s publication rate has slowed to 61 features during its sixteenth year – due to various time commitments, such as the work of the United States Transhumanist Party (which published 147 features on its website during the same timeframe) – the content of this magazine has drawn increasing interest. Readers, viewers, and listeners are gravitating toward both old and new features, as TRA generally aims to publish works of timeless relevance. The vaster our archive of content, the greater variety of works and perspectives it spans, the more issues it engages with and reflects upon – the more robust and diverse our audience becomes; the more insulated we become against the vicissitudes of the times and the fickle fluctuations of public sentiment and social-media fads.

None of the above is intended to deny or minimize the challenges faced by those seeking to articulate rational, nuanced, and sophisticated ideas on the contemporary Internet. Highly concerning changes to the consumption and availability of information have occurred over the course of this decade, including the following trends.

  • While social media have been beneficial in terms of rendering personal communication at a distance more viable, the fragmentation of social media and the movement away from the broader “open Internet” have seemingly accelerated. Instead of directly navigating and returning to websites of interest, most people now access content almost exclusively through social-media feeds. Even popular and appealing content may often become constrained within the walls of a particular social network or sub-group thereof, simply due to the “black-box” algorithms of that social network, which influence without explanation who sees what and when, and which may not be reflective of what those individuals would have preferred to see. The constantly changing nature of these algorithms renders it difficult for content creators to maintain steady connections with their audiences. If one adds to the mix the increasing and highly troubling tendency of social networks to actively police the content their members see, we may be returning to a situation where most people find their content inexplicably curated by “gatekeepers” who, in the name of objectivity and often with unconscious biases in play, end up advancing ulterior agendas not in the users’ interests.
  • While the democratization of access to knowledge and information on the Internet has undoubtedly had numerous beneficial effects, we are also all faced with the problem of “information overload” and the need to prioritize essential bits of information within an immense sea which we observe daily, hourly, and by the minute. The major drawback of this situation – in which everyone sees everything in a single feed, often curated by the aforementioned inexplicable algorithms – is the difficulty of even locating information that is more than a day old, as it typically becomes buried far down within the social-media feed. Potential counters exist to this tendency – namely, through the existence of old-fashioned, static websites which publish content that does not adjust and that is fixed to a particular URL, which could be bookmarked and visited time and again. But what proportion of the population has learned this technique of bookmarking and revisitation of older content – instead of simply focusing on the social-media feed of the moment? It is imperative to resist the short-termist tendencies that the design of contemporary social media seems to encourage, as indulging these tendencies has had deleterious impacts on attention spans in an entire epoch of human culture.
  • Undeniably, much interesting and creative content has proliferated on the Internet, with opportunities for both deliberate and serendipitous learning, discovery, and intellectual enrichment. Unfortunately, the emergence of such content has coincided with deleterious shifts in cultural norms away from the expectation of concerted, sequential focus (the only way that human minds can actually achieve at a high level) and toward incessant multi-tasking and the expectation of instantaneous response to any external stimulus, human or automated. The practice of dedicating a block of time to read an article, watch a video, or listen to an audio recording – once a commonplace behavior – has come to be a luxury for those who can wrest segments of time and space away from the whirlwind of external stimuli and impositions within which humans (irrespective of material resources or social position) are increasingly expected to spin. It is fine to engage with others and venture into digital common spaces occasionally or even frequently, but in order for such interactions to be productive, one has to have meaningful content to offer; the creation of such content necessarily requires time away from the commons and a reclamation of the concept of private, solitary focus to read, contemplate, apply, and create.
  • An environment in which immediate, recent, and short-term-oriented content tends to attract the most attention amplifies the impulsive, range-of-the-moment, reactive emotional tendencies of individuals, rather than the thoughtful, long-term-oriented, constructive, rational tendencies. Accordingly, political and cultural discourse become reduced to bitter one-liners that exacerbate polarization, intentional misunderstanding of others, and toxicity of rhetoric. The social networks where this has been most salient have been those that limit the number of characters per post and prioritize quantity of posts over quality and the instantaneity of a response over its thoughtfulness. The infrastructures whose design presupposes that everyone’s expressions are of equal value have produced a reduction of discourse to the lowest common denominator, which is, indeed, quite low. Even major news outlets, where some quality selection is still practiced by the editors, have found that user comments often degenerate into a toxic morass. This is not intended to deny the value of user comments and interaction, in a properly civil and constructive context; nor is it intended to advocate any manner of censorship. Rather, this observation emphatically underscores the need for a return to long-form, static articles and longer written exchanges more generally as the desirable prevailing form of intellectual discourse. (More technologically intensive parallels to this long-form discourse would include long-form audio podcasts or video discussion panels where there is a single stream of conversation or narrative instead of a flurry of competing distractions.) Yes, this form of discourse takes more time and skill. Yes, this means that people have to form complex, coherent thoughts and express them in coherent, grammatically correct sentences. Yes, this means that fewer people will have the ability or inclination to participate in that form of discourse.
And yes, that may well be the point – because less of the toxicity will make its way completely through the structures which define long-form discourse – and because anyone who can competently learn the norms of long-form discourse, as they have existed throughout the centuries, will remain welcome to take part. Those who are not able or willing to participate can still benefit by spectating and, in the process, learning and developing their own skills.

The Internet was intended, by its early adopters and adherents of open Internet culture – including myself – to catalyze a new Age of Enlightenment through the free availability of information that would break down old prejudices and enable massively expanded awareness of reality and possibilities for improvement. Such a possibility remains, but humans thus far have fallen massively short of realizing it – because the will must be present to utilize constructively the abundance of available resources. Cultivating this will is no easy task; The Rational Argumentator has been pursuing it for sixteen years and will continue to do so. The effects are often subtle, indirect, long-term – more akin to the gradual drift of continents than the upward ascent of a rocket. And yet progress in technology, science, and medicine continues to occur. New art continues to be created; new treatises continue to be written. Some people do learn, and some people’s thinking does improve. There is no alternative except to continue to act in pursuit of a brighter future, and in the hope that others will pursue it as well – that, cumulatively, our efforts will be sufficient to avert the direst crises, make life incrementally safer, healthier, longer, and more comfortable, and, as a civilization, persist beyond the recent troubled times. The Rational Argumentator is a bulwark against the chaos – hopefully one among many – and hopefully many are at work constructing more bulwarks. Within the bulwarks, great creations may have room to develop and flourish – waiting for the right time, once the chaos subsides or is pacified by Reason, to emerge and beautify the world. In the meantime, enjoy all that can be found within our small bulwark, and visit it frequently to help it expand.

Gennady Stolyarov II,
Editor-in-Chief, The Rational Argumentator

This essay may be freely reproduced using the Creative Commons Attribution Share-Alike International 4.0 License, which requires that credit be given to the author, G. Stolyarov II. Find out about Mr. Stolyarov here.

Review of Ray Kurzweil’s “How to Create a Mind” – Article by G. Stolyarov II

G. Stolyarov II


How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading toward the not-too-distant future (the late 2020s, according to Kurzweil) during which computers would be able to emulate human minds.

Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in its details, it can arise from the carrying out of simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.
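
The compositional idea behind hierarchical pattern recognition can be illustrated with a deliberately minimal sketch. The following Python toy is my own illustration, not Kurzweil’s actual neocortical model (which involves probabilistic, self-organizing hierarchies of many thousands of recognizers); it merely shows how units obeying simple rules at one level can combine to recognize larger structures at the next level:

```python
# Toy illustration of hierarchical pattern recognition: low-level
# recognizers match individual letters; a higher-level recognizer
# fires only when its expected sequence of lower-level patterns appears.
# This is a simplified sketch for illustration, not Kurzweil's model.

class PatternRecognizer:
    def __init__(self, name, expected):
        self.name = name            # label of the pattern this unit detects
        self.expected = expected    # sequence of lower-level labels it expects

    def recognize(self, inputs):
        """Return True if the input sequence matches the expected pattern."""
        return list(inputs) == list(self.expected)

# Level 1: trivial letter recognizers, each matching one raw symbol.
letters = {c: PatternRecognizer(c, [c]) for c in "aelp"}

# Level 2: a word recognizer built from the outputs of the letter level.
word = PatternRecognizer("apple", ["a", "p", "p", "l", "e"])

def recognize_word(raw):
    # Each raw symbol is first claimed by a letter-level recognizer...
    recognized = [c for c in raw if c in letters and letters[c].recognize([c])]
    # ...and the higher-level unit fires only on the full expected sequence.
    return word.recognize(recognized)

print(recognize_word("apple"))   # True
print(recognize_word("appel"))   # False
```

Even in this toy, the higher level never inspects raw input directly; it operates only on the labels emitted by the level beneath it, which is the structural feature Kurzweil emphasizes.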

With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). 
I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of today.

The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (that showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. 
Especially if the task is to engineer a mind that has at least the faculties of the human brain, then Kurzweil is wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which it became fashionable in some circles to disparage only in the 20th century. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to be able to rationally conclude that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.

One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of becoming a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you.  (How to Create a Mind, pp. 243-244)

This thought experiment is essentially the same one that I independently posited in my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self”:

Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chooses to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.

Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)

Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, but with a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist. Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I differ. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of the nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result may be potentially indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation.
It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.

How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes of arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans maintain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is due to a deliberate push upward by the efforts of the minds of the creators of progress, using the machines they have built.

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II). Learn more about Mr. Stolyarov here.

Ideas for Technological Solutions to Destructive Climate Change – Article by G. Stolyarov II

G. Stolyarov II



Destructive climate change is no longer a hypothesis or mere possibility; rather, the empirical evidence for it has become apparent in the form of increasingly frequent extremes of temperature and natural disasters – particularly the ongoing global heat wave and major wildfires occurring in diverse parts of the world. In each individual incident, it is difficult to pinpoint “climate change” as a singular cause, but climate change can be said to exacerbate the frequency and severity of the catastrophes that arise. Residing in Northern Nevada for the past decade has provided me with ample empirical evidence of the realities of deleterious climate change. Whereas there were no smoke inundations from California wildfires during the first four summers of my time in Northern Nevada, the next six consecutive summers (2013-2018) were all marked by widespread, persistent inflows of smoke from major wildfires hundreds of kilometers away, rendering the air quality here unhealthy for long periods of time. From a purely probabilistic standpoint, such a prolonged sequence of consistently recurring smoke inundations would be extremely improbable in the absence of some significant climate change. Even amid continued debate over the nature and causes of climate change, the probabilities favor taking action to mitigate the evident adverse effects and relying on the best-available scientific understanding to do so – with the allowance that this understanding will evolve and hopefully become more refined over time, as good science does. Thus, it is most prudent to accept that deleterious climate change is occurring and that at least a significant contribution to it comes from emissions of certain gases, such as carbon dioxide and methane, into the atmosphere as a result of particular human activities, the foremost of which is the use of fossil fuels. This is not an indictment of human beings, nor even of fossil fuels per se, but rather an indication that the deleterious side effects of particular activities should be prevented or alleviated through further human activity and ingenuity.
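
The probabilistic intuition above can be made concrete with a toy calculation. As a rough, hypothetical sketch (the baseline probability p is an assumed illustrative figure, not a measured one, and real summers are not truly independent events), if a smoke-inundated summer had probability p in any given year under a stable climate, then six such summers in a row would have probability p raised to the sixth power:

```python
# Toy illustration of the argument above: if, under a stable climate,
# any given summer had a modest baseline probability p of widespread
# smoke inundation, six consecutive such summers would have probability
# p**6 -- minuscule for small p. (The values of p are assumed here
# purely for illustration.)

def prob_consecutive(p: float, n: int) -> float:
    """Probability of n independent events, each with probability p."""
    return p ** n

for p in (0.1, 0.2, 0.3):
    print(f"baseline p = {p}: P(6 consecutive smoky summers) = "
          f"{prob_consecutive(p, 6):.6f}")
```

Even at a generous baseline of p = 0.3, six consecutive smoky summers would occur by chance less than once in a thousand trials – which is the sense in which the observed streak favors a changed climate.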

Yet one of the major causes of historical reluctance among laypersons, especially in the United States, to accept the findings of the majority of climate scientists has been the misguided conflation by certain activists (almost always on the political Left) of the justifiable need to prevent or mitigate the effects of climate change with specific policy recommendations that are profoundly counterproductive to that purpose and would only increase the everyday suffering of ordinary people without genuinely alleviating deleterious climate change. The policy recommendations of this sort have historically fallen into two categories: (i) Neo-Malthusian, “back to nature” proposals to restrict the use of advanced technologies and return to more primitive modes of living; and (ii) elaborate economic manipulations, such as the creation of artificial markets in “carbon credits”, or the imposition of a carbon tax or a related form of “Pigovian tax” – ostensibly to associate the “negative externalities” of greenhouse-gas emissions with a tangible cost. The Neo-Malthusian “solutions” would, in part deliberately, cause extreme detriments to most people’s quality of life (for those who remain alive), while simultaneously resulting in the use of older, far more environmentally destructive techniques of energy generation, such as massive deforestation or the combustion of animal byproducts. The Neo-Pigovian economic manipulations ignore how human motives and incentives actually work and are far too indirect and contingent on a variety of assumptions that are virtually never likely to hold in practice. At the same time, the artificially complex structures that these economic manipulations inevitably create would pose obstructions to the direct deployment of more straightforward solutions by entangling such potential solutions in an inextricable web of compliance interdependencies.

The solutions to destructive climate change are ultimately technological and infrastructural. No single device or tactic – and certainly no tax or prohibition – can comprehensively combat a problem of this magnitude and variety of impacts. However, a suite of technologically oriented approaches – pushing forward the deployment and quality of the arsenal of tools available to humankind – could indeed arrest and perhaps reverse the course of deleterious climate change by directly reducing the emissions of greenhouse gases and/or directly alleviating the consequences of increased climate variability.

Because both human circumstances and current as well as potential technologies are extremely diverse, no list of potential solutions to deleterious climate change can ever be exhaustive. Here I attempt the beginnings of such a list, but I invite others to contribute additional technologically oriented solutions as well. There are only two constraints on the kinds of solutions that can feasibly and ethically combat deleterious climate change – but those constraints are of immense importance:

Constraint 1. The solutions may not result in a net detriment to any individual human’s length or material quality of life.

Constraint 2. The solutions may not involve the prohibition of technologies or the restraint of further technological progress.

Constraint 1 implies that any solution to deleterious climate change will need to be a Pareto improvement: at least one person should benefit, while no person should suffer a detriment (or at least a detriment that has not been satisfactorily compensated for in that person’s judgment). Constraint 2 implies a techno-optimistic and technoprogressive perspective on combating deleterious climate change: we can do it without restrictions or prohibitions, but rather through innovations that will benefit all humans. Some technologies, particularly those associated with the extraction and use of fossil fuels, may gradually be consigned to obsolescence and irrelevance with this approach, but this will be due to their voluntary abandonment once superior, more advanced technological alternatives become widespread and economical to deploy. The greater the freedom to innovate and the more active the acceleration of technological progress, the sooner that stage of fossil-fuel obsolescence could be reached. In the meantime, some damaging events are unfortunately unavoidable (as are many natural catastrophes more generally in our still insufficiently advanced era), but a variety of approaches can be deployed to at least prevent or reduce some damage that would otherwise arise.

If humanity solves the problems of deleterious climate change, it can only be with the mindset that solutions are indeed achievable, and they are achievable without compromising our progress or standards of living. We must be neither defeatists nor reactionaries, but rather should proactively accelerate the development of emerging technologies to meet this challenge by actualizing the tremendous creative potential our minds have to offer.

What follows is the initial list of potential solutions. Long may it grow.

Direct Technological Innovation

  • Continued development of economical solar and wind power that could compete with fossil fuels on the basis of cost alone.
  • Continued development of electric vehicles and increases in their range, as well as deployment of charging stations throughout all inhabited areas to enable recharging to become as easy as refueling a gasoline-powered vehicle.
  • Development of in vitro (lab-grown) meat that is biologically identical to currently available meat but does not require actual animals to die. Eventually this could lead to a substantial decline in the commercial raising of cattle, which contributes significantly to methane emissions.
  • Development of vertical farming to increase the amount of arable land indoors – rendering more food production largely unaffected by climate change.
  • Autonomous vehicles offered as services by transportation network companies – reducing the need for direct car ownership in urban areas.
  • Development and spread of pest-resistant, drought-resistant genetically modified crops that require less intensive cultivation techniques and less application of spray pesticides, and which can also flourish in less hospitable climates.
  • Construction of hyperloop transit networks among major cities, allowing rapid transit without the pollution generated by most automobile and air travel. Hyperloop networks would also allow for more rapid evacuation from a disaster area.
  • Construction of next-generation, meltdown-proof nuclear-power reactors, including those that utilize the thorium fuel cycle. It is already possible today for most of a country’s electricity to be provided through nuclear power, if only the fear of nuclear energy could be overcome. However, the best way to overcome the fear of nuclear energy is to deploy new technologies that eliminate the risk of meltdown. In addition to this, technologies should be developed to reprocess nuclear waste and to safely re-purpose dismantled nuclear weapons for civilian energy use.
  • Construction of smart infrastructure systems and devices that enable each building to use available energy with the maximum possible benefit and minimum possible waste, while also providing opportunities for the building to generate its own renewable energy whenever possible.
  • In the longer term, development of technologies to capture atmospheric carbon dioxide and export it via spaceships to the Moon and Mars, where it could be released as part of efforts to generate a greenhouse effect and begin terraforming these worlds.

Disaster Response

  • Fire cameras located at prominent vantage points in any area of high fire risk – perhaps linked to automatic alerts to nearby fire departments and sprinkler systems built into the landscape, which might be auto-activated if a sufficiently large fire is detected in the vicinity.
  • Major increases in recruitment of firefighters, with generous pay and strategic construction of outposts in wilderness areas. Broad, paved roads need to lead to the outposts, allowing for heavy equipment to reach the site of a wildfire easily.
  • Development of firefighting robots to accompany human firefighters. The robots would need to be constructed from fire-resistant materials and have means of transporting themselves over rugged terrain (e.g., tank treads).
  • Design and deployment of automated firefighting drones – large autonomous aircraft that could carry substantial amounts of water and/or fire-retardant sprays.

Disaster Prevention

  • Recruitment of large brush-clearing brigades to travel through heavily forested areas – particularly remote and seldom-accessed ones – and clear dead vegetation as well as other wildfire fuels. This work does not require significant training or expertise and so could offer an easy job opportunity for currently unemployed or underemployed individuals. In the event of shortages of human labor, brush-clearing robots could be designed and deployed. The robots could also have the built-in capability to reprocess dead vegetation into commercially usable goods – such as mulch or wood pellets. Think of encountering your friendly maintenance robot when hiking or running on a trail!
  • Proactive creation of fire breaks in wilderness areas – not “controlled burns” (which are, in practice, difficult to control) but rather controlled cuts of smaller, flammable brush to reduce the probability of fire spreading. Larger trees of historic significance should be spared, but with defensible space created around them.
  • Deployment of surveillance drones in forested areas, to detect behaviors such as vandalism or improper precautions around manmade fires – which are often the causes of large wildfires.
  • Construction of large levees throughout coastal regions – protecting lowland areas from flooding and achieving in the United States what has been achieved in the Netherlands over centuries on a smaller scale. Instead of building a wall at the land border, build many walls along the coasts!
  • Construction of vast desalination facilities along ocean coasts. These facilities would take in ocean water, thereby counteracting the effects of rising water levels, then purify the water and transmit it via a massive pipe network throughout the country, including to drought-prone regions. This would mitigate multiple problems at once, reducing the excess of water in the oceans while replenishing the deficit of water in inland areas.
  • Creation of countrywide irrigation and water-pipeline networks to spread available water and prevent drought wherever it might arise.

Economic Policies

  • Redesign of home insurance policies and disaster-mitigation/recovery grants to allow homeowners who lost their homes to natural disasters to rebuild in different, safer areas.
  • Development of workplace policies to encourage telecommuting and teleconferencing, including through immersive virtual-reality technologies that allow for plausible simulacra of in-person interaction. The majority of business interactions can be performed virtually, eliminating the need for much business-related commuting and travel.
  • Elimination of local and regional monopoly powers of utility companies in order to allow alternative-energy utilities, such as companies specializing in the installation of solar panels, to compete and offer their services to homeowners independently of traditional utilities.
  • Establishment of consumer agencies (public or private) that review products for durability and encourage the construction of devices that lack “planned obsolescence” but rather can be used for decades with largely similar effect.
  • Establishment of easily accessible community repair shops where old devices and household goods can be taken to be repaired or re-purposed instead of being discarded.
  • Abolition of inflexible zoning regulations and overly prescriptive building codes; replacement with a more flexible system that allows a wide variety of innovative construction techniques, including disaster-resistant and sustainable construction methods, tiny homes, homes created from re-purposed materials, and mixed-use residential/commercial developments (which also reduce the need for vehicular commuting).
  • Abolition of sales taxes on energy-efficient consumer goods.
  • Repeal or non-enactment of any mileage-based taxes for electric or hybrid vehicles, thereby resulting in such vehicles becoming incrementally less expensive to operate.
  • Lifting of all bans and restrictions on genetically modified plants and animals – which are a crucial component in adaptation to climate change and in reducing the carbon footprint of agricultural activities.

Harm Mitigation

  • Increases in planned urban vegetation through parks, rooftop gardens, trees planted alongside streets, and pedestrian / bicyclist “greenways” lined with vegetation. The additional vegetation can absorb carbon dioxide, reducing its concentration in the atmosphere.
  • Construction of additional pedestrian / bicyclist “greenways”, which could help reduce the need for vehicular commutes.
  • Construction of always-operational disaster shelters with abundant stockpiles of aid supplies, in order to prevent the delays in deployment of resources that occur during a disaster. When there is no disaster, the shelters could perform other valuable tasks that generally are not conducive to market solutions, such as litter cleanup in public spaces or even offering inexpensive meeting space to various individuals and organizations. (This could also contribute to the disaster shelters largely becoming self-funding in calm times.)
  • Provision of population-wide free courses on disaster preparation and mitigation. The courses could have significant online components as well as in-person components administered by first-aid and disaster-relief organizations.

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II). Learn more about Mr. Stolyarov here.

Fourth Enlightenment Salon – Gennady Stolyarov II, Bill Andrews, Bobby Ridge, and John Murrieta Discuss Transhumanist Outreach and Curing Disabilities


Gennady Stolyarov II
Bill Andrews
Bobby Ridge
John Murrieta


On July 8, 2018, during his Fourth Enlightenment Salon, Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, invited John Murrieta, Bobby Ridge, and Dr. Bill Andrews for an extensive discussion about transhumanist advocacy, science, health, politics, and related subjects. In this first of several installments from the Fourth Enlightenment Salon, the subjects of conversation include the following:

• The U.S. Transhumanist Party’s recent milestone of 1,000 members and what this portends for outreach toward the general public regarding the meaning of transhumanism and the many ways in which emerging technologies help make life better.

• The new channel – Science-Based Species – launched by Bobby and John to spread basic knowledge about transhumanism, key thinkers in the movement, and advances on the horizon.

• How today’s technologies to assist the disabled are already transhumanist in their effects, and how technologies already in development can liberate humans from disability altogether. John Murrieta’s story is one of transhumanism literally saving a life – and one of the most inspiring examples of how transhumanism translates into human well-being now and in the future.

Join the U.S. Transhumanist Party for free, no matter where you reside, by filling out an application form that takes less than a minute. Members will also receive a link to a free compilation of Tips for Advancing a Brighter Future, providing insights from the U.S. Transhumanist Party’s Advisors and Officers on some of what you can do as an individual to improve the world and bring it closer to the kind of future we wish to see.

U.S. Transhumanist Party Chairman Gennady Stolyarov II Answers Common Interview Questions


Gennady Stolyarov II


Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party and Chief Executive of the Nevada Transhumanist Party, answers questions posed by Francesco Sacco, which are representative of common points of inquiry regarding transhumanism and the Transhumanist Party:

1. What is Transhumanism and what inspired you to follow it?
2. What are the long-term goals of the Transhumanist Party?
3. What are your thoughts on death and eternal life through technological enhancements?
4. Do you feel there are any disadvantages to having access to the cure for death? What advantages are there?

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form here.

See Mr. Stolyarov’s presentation, “The U.S. Transhumanist Party: Pursuing a Peaceful Political Revolution for Longevity“.

Beginners’ Explanation of Transhumanism – Presentation by Bobby Ridge and Gennady Stolyarov II


Bobby Ridge
Gennady Stolyarov II


Bobby Ridge, Secretary-Treasurer of the U.S. Transhumanist Party, and Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, provide a broad “big-picture” overview of transhumanism and major ongoing and future developments in emerging technologies that present the potential to revolutionize the human condition and resolve the age-old perils and limitations that have plagued humankind.

This is a beginners’ overview of transhumanism – which means that it is for everyone, including those who are new to transhumanism and the life-extension movement, as well as those who have been involved in it for many years – since, when it comes to dramatically expanding human longevity and potential, we are all beginners at the beginning of what could be our species’ next great era.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside.

See Mr. Stolyarov’s presentation, “The U.S. Transhumanist Party: Pursuing a Peaceful Political Revolution for Longevity“.

In the background of some of the video segments is a painting now owned by Mr. Stolyarov, from “The Singularity is Here” series by artist Leah Montalto.

U.S. Transhumanist Party / Institute of Exponential Sciences Discussion Panel on Cryptocurrencies


Gennady Stolyarov II
Demian Zivkovic
Chantha Lueung
Laurens Wes
Moritz Bierling


On Sunday, February 18, 2018, the U.S. Transhumanist Party and the Institute of Exponential Sciences hosted an expert discussion panel on how cryptocurrencies and blockchain-based technologies may affect future economies and everyday life. Panelists were asked about the most significant promise of cryptocurrencies, as well as the most significant current obstacles to its realization.

Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, and Demian Zivkovic, President of the Institute of Exponential Sciences, are the moderators for this panel.

Panelists

Moritz Bierling

Moritz Bierling, in his work for Exosphere Academy – a learning and problem-solving community – has organized a Space Elevator bootcamp, an Artificial Intelligence conference, and an Ethereum training course while also authoring a Primer on the emerging discipline of Alternate Reality Design. As Blockchain Reporter for the Berlin blockchain startup Neufund, he has educated the city’s Venture Capital and startup scene, as well as the broader public on the applications of this groundbreaking technology. His work has appeared in a number of blockchain-related and libertarian media outlets such as CoinTelegraph, The Freeman’s Perspective, Bitcoin.com, and the School Sucks Project. See his website at MoritzBierling.com.

Chantha Lueung

Chantha Lueung is the creator of Crypto-city.com, a social-media website focused on building the future world of cryptocurrencies by connecting crypto-enthusiasts with the general public. He is a full-time trader and also participates in the HyperStake coin project, a Bitcoin alternative that uses the very energy-efficient Proof of Stake (POS) protocol.

Laurens Wes

Laurens Wes is a Dutch engineer and chief engineering officer at the Institute of Exponential Sciences. Furthermore, he is the owner of Intrifix, a company focused on custom 3D-printed products and software solutions. He has also studied Artificial Intelligence and is very interested in transhumanism, longevity, entrepreneurship, cryptocurrencies/blockchain technology, and art (and a lot more). He is a regular speaker for the IES and is very committed to educating the public on accelerated technological developments and exponential sciences.

The YouTube question/comment chat for this Q&A session has been archived here and is also provided below.

Visit the U.S. Transhumanist Party Facebook page here.

See the U.S. Transhumanist Party FAQ here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside.

Become a Foreign Ambassador for the U.S. Transhumanist Party.

References

Chat Log from the Panel Discussion on Cryptocurrencies of February 18, 2018


California Transhumanist Party Leadership Meeting – Presentation by Newton Lee and Discussion on Transhumanist Political Efforts


Newton Lee
Gennady Stolyarov II
Bobby Ridge
Charlie Kam


The California Transhumanist Party held its inaugural Leadership Meeting on January 27, 2018. Newton Lee, Chairman of the California Transhumanist Party and Education and Media Advisor of the U.S. Transhumanist Party, outlined the three Core Ideals of the California Transhumanist Party (modified versions of the U.S. Transhumanist Party’s Core Ideals); the forthcoming book “Transhumanism: In the Image of Humans”, which he is curating and which will contain essays from leading transhumanist thinkers in a variety of realms; and possibilities for outreach, future candidates, and collaboration with the U.S. Transhumanist Party and Transhumanist Parties in other States. U.S. Transhumanist Party Chairman Gennady Stolyarov II contributed by providing an overview of the U.S. Transhumanist Party’s current operations and possibilities for running or endorsing candidates for office in the coming years.

Visit the website of the California Transhumanist Party: http://www.californiatranshumanistparty.org/index.html

Read the U.S. Transhumanist Party Constitution: http://transhumanist-party.org/constitution/

Become a member of the U.S. Transhumanist Party for free: http://transhumanist-party.org/membership/

(If you reside in California, this would automatically render you a member of the California Transhumanist Party.)

Review of Philip Tetlock’s “Superforecasting” – Article by Adam Alonzi


The New Renaissance Hat
Adam Alonzi
******************************
Alexander Consulting the Oracle of Apollo, Louis Jean François Lagrenée, 1789, oil on canvas.

“All who drink of this treatment recover in a short time, except those whom it does not help, who all die. It is obvious, therefore, that it fails only in incurable cases.”

-Galen

Before the advent of evidence-based medicine, most physicians took an attitude like Galen’s toward their prescriptions. If their remedies did not work, surely the fault was with their patient. For centuries scores of revered doctors did not consider putting bloodletting or trepanation to the test. Randomized trials to evaluate the efficacy of a treatment were not common practice. Doctors like Archie Cochrane, who fought to make them part of standard protocol, were met with fierce resistance. Philip Tetlock, author of Superforecasting: The Art and Science of Prediction (2015), contends that the state of forecasting in the 21st century is strikingly similar to medicine in the 19th. Initiatives like the Good Judgement Project (GJP), a website that allows anyone to make predictions about world events, have shown that even a discipline that is largely at the mercy of chance can be put on a scientific footing.

More than once the author reminds us that the key to success in this endeavor is not what you think or what you know, but how you think. For Tetlock, pundits like Thomas Friedman are the “exasperatingly evasive” Galens of the modern era. In the footnotes he lets the reader know he chose Friedman as a target strictly because of his prominence; there are many like him. Tetlock’s academic work comparing random selections with those of professionals led media outlets to publish, and a portion of their readers to conclude, that expert opinion is no more accurate than a dart-throwing chimpanzee. What the undiscerning did not consider, however, is that not all of the experts who participated failed to do better than chance.

Daniel Kahneman hypothesized that “attentive readers of the New York Times…may be only slightly worse” than the experts whom corporations and governments so handsomely recompense. This turned out to be a conservative guess. The participants in the Good Judgement Project outperformed all control groups, including one composed of professional intelligence analysts with access to classified information. This hodgepodge of retired bird watchers, unemployed programmers, and news junkies did 30% better than the “pros.” More importantly, at least to readers who want to gain a useful skillset as well as general knowledge, the managers of the GJP have identified qualities and ways of thinking that separate “superforecasters” from the rest of us. Fortunately, they are qualities we can all cultivate.

While the merits of his macroeconomic theories can be debated, John Maynard Keynes was an extremely successful investor during one of the bleakest periods in international finance. This was no doubt due in part to his willingness to make allowance for new information and his grasp of probability. Participants in the GJP display open-mindedness, an ability and willingness to repeatedly update their forecasts, a talent for neither under- nor over-reacting to new information by putting it into a broader context, and a predilection for mathematical thinking (though those interviewed admitted they rarely used an explicit equation to calculate their answers). The figures they give also tend to be more precise than those of their less successful peers. This “granularity” may seem ridiculous at first. I must confess that when I first saw estimates on the GJP of 34% or 59%, I would chuckle a bit. How, I asked myself, is a single percentage point meaningful? Aren’t we just dealing with rough approximations? Apparently not.

Tetlock reminds us that the GJP does not deal with nebulous questions like “Who will be president in 2027?” or “Will a level 9 earthquake hit California two years from now?” However, there are questions that are not, in the absence of unforeseeable Black Swan events, completely inscrutable. Who will win the Mongolian presidency? Will Uruguay sign a trade agreement with Laos in the next six months? These are parts of highly complex systems, but they can be broken down into tractable subproblems.

Using numbers instead of words like “possibly”, “probably”, or “unlikely” seems unnatural. Vague words give us wiggle room and plausible deniability; they also cannot be put on any sort of record to keep score of how well we’re doing. Still, to some, numerical forecasts may seem silly, pedantic, or presumptuous. If the Joint Chiefs of Staff had given the exact figure they had in mind (3 to 1) instead of the “fair chance” given to Kennedy, the Bay of Pigs debacle might never have transpired. Because they represent ranges of values instead of single numbers, words can be retroactively stretched or shrunk to make blunders seem a little less avoidable. This is good for advisors looking to cover their hides by hedging their bets, but not so great for everyone else.

If American intelligence agencies had presented Congress with the formidable but vincible figure of 70% instead of a “slam dunk”, a disastrous invasion and costly occupation might have been prevented. At this point it is hard to see the invasion as anything but a mistake, but even amidst these emotions we must be wary of hindsight. Still, a 70% chance of being right means there is a 30% chance of being wrong; it is hardly a “slam dunk.” No one would feel completely at ease if an oncologist told them they are 70% sure the growth is not malignant. There are enormous consequences to sloppy communication. However, those with vested interests are more than content with this approach if it agrees with them, even if it ends up harming them.

When Nate Silver put the odds of the 2008 election in Obama’s favor, he was panned by Republicans as a pawn of the liberal media. He was quickly reviled by Democrats when he foresaw a Republican takeover of the Senate. It is hard to be a wizard when the king, his court, and all the merry peasants sweeping the stables would not know a confirmation bias from their right foot. To make matters worse, confidence is widely equated with capability. This seems to be doubly true of groups of people, particularly when they are choosing a leader. A mutual-fund manager who tells his clients they will see great returns on a company is viewed as stronger than a Poindexter prattling on about Bayesian inference and risk management.

The GJP’s approach has not spread far — yet. At this time most pundits, consultants, and self-proclaimed sages do not explicitly quantify their success rates, but this does not stop corporations, NGOs, and institutions at all levels of government from paying handsomely for the wisdom of untested soothsayers. Perhaps they have a few diplomas, but most cannot provide compelling evidence for expertise in haruspicy (sans the sheep’s liver). Given the criticality of accurate analyses to saving time and money, it would seem as though a demand for methods to improve and assess the quality of foresight would arise. Yet for the most part individuals and institutions continue to happily grope in the dark, unaware of the necessity for feedback when they misstep — afraid of having their predictions scrutinized or having to take the pains to scrutinize their predictions.
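
One standard way to make forecasters’ track records scoreable is the Brier score – the scoring rule Tetlock’s Good Judgement Project in fact used to rank its participants, though the review above does not name it. A minimal sketch, assuming binary yes/no questions:

```python
# A minimal sketch of the Brier score: the mean squared error between
# probability forecasts (each in 0..1) and binary outcomes (0 or 1).
# Lower is better; 0 is a perfect record, and a forecaster who always
# says 50% scores 0.25 on every question.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecasts and outcomes."""
    if len(forecasts) != len(outcomes):
        raise ValueError("need one outcome per forecast")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

print(brier_score([0.9], [0]))  # confidently wrong: near the 1.0 maximum
print(brier_score([0.5], [0]))  # perpetual fence-sitter: 0.25 on any question
print(brier_score([0.9], [1]))  # confidently right: near the 0.0 minimum
```

Because the penalty is quadratic, a forecaster who says 90% and is wrong loses far more than one who said 60% – which is precisely why the “granularity” that seemed ridiculous at first has teeth once scores are actually kept.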

David Ferrucci is wary of the “guru model” of settling disputes. No doubt you’ve witnessed or participated in this kind of whimpering fracas: one person presents a Krugman op-ed to debunk a Niall Ferguson polemic, which is then countered with a Tommy Friedman book, which was recently excoriated by the newest leader of the latest intellectual cult to come out of the Ivy League. In the end both sides leave frustrated. Krugman’s blunders regarding the economic prospects of the Internet, deflation, and the “imminent” collapse of the euro (predicted repeatedly between 2010 and 2012) are legendary. Similarly, Ferguson, who strongly petitioned the Federal Reserve to reconsider quantitative easing, lest the United States suffer Weimar-like inflation, has not yet been vindicated. He and his colleagues responded in the same way as other embarrassed prophets: be patient, it has not happened, but it will! In his defense, more than one clever person has criticized the way governments calculate their inflation rates…

Paul Ehrlich, a darling of the environmentalist movement, has screeched about the detonation of a “population bomb” for decades. Civilization was set to collapse within 15 to 30 years of 1970. In the interim, 100 to 200 million people would starve to death annually; by the year 2000 no crude oil would be left; the prices of raw materials would skyrocket; and the planet would be in the midst of a perpetual famine. Tetlock does not mention Ehrlich, but he is as deserving of a place in this hall of fame as anyone else, or more so, particularly given his persisting influence on Greens. Larry Kudlow, meanwhile, continued to assure the American people that the Bush tax breaks were producing massive economic growth. This continued well into 2008, when he repeatedly told journalists that America was not in a recession and that the Bush boom was “alive and well.” For his stupendous commitment to this contention in the face of overwhelming evidence to the contrary, he was nearly awarded a seat in the Trump cabinet.

This is not to say a mistake should become the journalistic equivalent of a scarlet letter. Kudlow’s slavish adherence to his axioms is not unique, and Ehrlich’s blindness to technological advances is not uncommon, even in an era dominated by technology. By failing to set a timeline or give detailed causal accounts, many believe they have predicted every crash since they learned how to say the word. This is likely because they begin each day with the same mantra: “the market will crash.” Yet through an automatically executed routine of psychological somersaults, they do not see that they were right only once and wrong dozens, hundreds, or thousands of times. This kind of person is far more deserving of scorn than a poker player who boasts about his victories, because the poker player is (likely) also aware of how often he loses. At least he is not fooling himself. The severity of Ehrlich’s misfires is a reminder of what happens when someone looks too far ahead while assuming all things will remain the same. Ceteris paribus exists only in laboratories and textbooks.

Axioms are fates accepted by different people as truth, and the belief in Fate (in the form of retroactive narrative construction) is a nearly ubiquitous stumbling block to clear thinking. We may be far removed from Sophocles, but the unconscious human drive to construct sensible narratives is not peculiar to fifth-century B.C. Athens. A questionnaire given to students at Northwestern showed that most believed things had turned out for the best even when they had not gotten into their first-choice school. From an outsider’s perspective this is probably not true. In our cocoons we like to think we are in the right place, whether through the hand of fate or through our own choices. Atheists are not immune to this Panglossian habit. Our brains are wired for stories, but the stories we tell ourselves about ourselves seldom come out without distortions. We can gain a better outside view, which allows us to see situations from perspectives other than our own, but only through regular practice with feedback. This is one of the reasons groups are valuable.

Francis Galton famously analyzed 787 villagers’ guesses of the weight of an ox at a country fair. The average of their guesses (1,197 lbs) turned out to be remarkably close to the actual weight (1,198 lbs). Scott Page has said that “diversity trumps ability.” This is a tad bold, since legions of very different imbeciles will never produce anything of value, but there is undoubtedly a benefit to having a group with more than one point of view. The GJP tested this: teams outperformed lone wolves by a significant margin (23%, to be exact), partially because members encouraged one another and built a culture of excellence, and partially through the power of collective intelligence.
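Galton’s result is easy to reproduce in simulation. The sketch below assumes a hypothetical crowd whose individual errors are independent and roughly normal; the point is only that independent errors tend to cancel in the average:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible
true_weight = 1198  # Galton's ox, in pounds

# 787 hypothetical villagers, each guessing with substantial individual error.
guesses = [true_weight + random.gauss(0, 150) for _ in range(787)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - true_weight)
avg_individual_error = sum(abs(g - true_weight) for g in guesses) / len(guesses)

# The crowd's collective error is far smaller than the average individual's,
# because the overestimates and underestimates largely cancel out.
```

The standard error of the mean shrinks roughly as the square root of the crowd’s size, so 787 independent guessers beat almost any one of their number.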

“No battle plan survives contact with the enemy.”

-Helmuth von Moltke

“Everyone has a plan till they get punched in the mouth.”

-Mike Tyson

When Archie Cochrane was told by his surgeon that he had cancer, he prepared for death. Type 1 thinking grabbed hold of him, and he did not doubt the diagnosis; a pathologist later told him the surgeon was wrong. The best of us, under pressure, fall back on habitual modes of thinking. This is another reason why groups are useful (assuming all their members do not also panic). Organizations like the GJP and the Millennium Project are showing how well collective-intelligence systems can perform. Helmuth von Moltke and Mike Tyson aside, a better motto, substantiated by a growing body of evidence, comes from Dwight Eisenhower: “plans are useless, but planning is indispensable.”

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.