
Review of Ray Kurzweil’s “How to Create a Mind” – Article by G. Stolyarov II


G. Stolyarov II


How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading toward the not-too-distant future (the late 2020s, according to Kurzweil) during which computers would be able to emulate human minds.

Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in its details, it can arise on the basis of carrying out simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.
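The fractal analogy can be made concrete in a few lines of code. The sketch below is purely illustrative (it is not drawn from the book): a single update rule – each cell becomes the XOR of its two neighbors – generates, step by step, the intricate Sierpinski-triangle pattern, a small demonstration of how simple, conceptually fathomable rules yield elaborate structure.

```python
# Rule 90 cellular automaton: each cell's next state is the XOR of its
# two neighbors. This one rule generates the Sierpinski-triangle fractal.
width, steps = 63, 16
row = [0] * width
row[width // 2] = 1          # start from a single "on" cell

for _ in range(steps):
    print("".join("#" if cell else " " for cell in row))
    row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
```

Each printed line depends only on the line before it, yet the cumulative picture is a self-similar triangle of triangles – complexity emerging from a rule that fits in one expression.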

With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). 
I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of doing today.
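The contrast between sequentially encoded memory and a mental “object” traversable in any direction has a loose parallel in data structures: a chain of successor links naturally supports only forward recitation, while a sequence held whole can be read in either direction with equal ease. A minimal sketch of the analogy (illustrative only; this is not a cognitive model):

```python
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

# Sequential encoding: each letter "knows" only its successor,
# so recitation naturally runs forward from the starting element.
successor = {a: b for a, b in zip(alphabet, alphabet[1:])}

def recite_forward(start="A"):
    out = [start]
    while out[-1] in successor:
        out.append(successor[out[-1]])
    return "".join(out)

# Holding the sequence as a single object permits movement in
# either direction, just as the essay's "landscape" metaphor suggests.
print(recite_forward())     # forward order via successor links
print(alphabet[::-1])       # backward order via the whole object
```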

The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (that showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. 
Especially if the task is to engineer a mind that has at least the faculties of the human brain, Kurzweil is wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which it became fashionable to disparage in some circles predominantly in the 20th century. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to conclude rationally that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.

One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of becoming a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you.  (How to Create a Mind, pp. 243-244)

This thought experiment is essentially the same one as I independently posited in my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self”:

Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chooses to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.

Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)

Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, but with a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist. Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I differ. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of the nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result may be potentially indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation.
It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.

How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and the more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes for the arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans retain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is due to a deliberate push upward by the minds of the creators of progress, using the machines they have built.
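The compounding arithmetic behind the “law of accelerating returns” is simple to illustrate. Under a fixed doubling period T, capability multiplies as 2^(t/T); the 1.5-year doubling period below is a stand-in chosen for illustration, not Kurzweil’s precise estimate.

```python
# Exponential improvement under a fixed doubling period.
# The 1.5-year figure is an illustrative assumption, not a measured value.
def fold_improvement(years, doubling_period=1.5):
    """How many times over a capability multiplies after `years`."""
    return 2 ** (years / doubling_period)

for years in (3, 9, 15):
    print(f"after {years} years: {fold_improvement(years):,.0f}x")
```

The point of the arithmetic is that small, steady doublings compound into enormous differences over a decade or two, which is why timing estimates such as “the late 2020s” hinge so heavily on the assumed doubling period.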

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II).

Ideas for Technological Solutions to Destructive Climate Change – Article by G. Stolyarov II


G. Stolyarov II



Destructive climate change is no longer a hypothesis or mere possibility; rather, the empirical evidence for it has become apparent in the form of increasingly frequent extremes of temperature and natural disasters – particularly the ongoing global heat wave and major wildfires occurring in diverse parts of the world. In each individual incident, it is difficult to pinpoint “climate change” as a singular cause, but climate change can be said to exacerbate the frequency and severity of the catastrophes that arise. Residing in Northern Nevada for the past decade has provided me ample empirical evidence of the realities of deleterious climate change. Whereas there were no smoke inundations from California wildfires during the first four summers of my time in Northern Nevada, the next six consecutive summers (2013-2018) were all marked by widespread, persistent inflows of smoke from major wildfires hundreds of kilometers away, so as to render the air quality here unhealthy for long periods of time. From a purely probabilistic standpoint, the probability of this prolonged sequence of recent but consistently recurring smoke inundations would be minuscule in the absence of some significant climate change. Even in the presence of some continued debate over the nature and causes of climate change, the probabilities favor some action to mitigate the evident adverse effects and to rely on the best-available scientific understanding to do so, even with the allowance that the scientific understanding will evolve and hopefully become more refined over time – as good science does. Thus, it is most prudent to accept that there is deleterious climate change and that at least a significant contribution to it comes from emissions of certain gases, such as carbon dioxide and methane, into the atmosphere as a result of particular human activities, the foremost of which is the use of fossil fuels. 
This is not an indictment of human beings, nor even of fossil fuels per se, but rather an indication that the deleterious side effects of particular activities should be prevented or alleviated through further human activity and ingenuity.
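The probabilistic argument can be made explicit with hypothetical numbers. Assuming, purely for illustration (the article cites no base rate), that any given summer historically carried some modest chance p of a major smoke inundation, the chance of six consecutive such summers arising by coincidence alone shrinks as p to the sixth power:

```python
# Illustrative arithmetic only: the base rates below are assumptions,
# not measured figures from the article.
for p in (0.1, 0.2, 0.3):
    p_run = p ** 6   # probability of six independent smoky summers in a row
    print(f"base rate {p:.0%}: P(six in a row by chance) = {p_run:.6f}")
```

Even at a generous 30% base rate, six consecutive occurrences by chance would be well under one in a thousand, which is the intuition behind inferring that the underlying conditions have shifted.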

Yet one of the major causes of historical reluctance among laypersons, especially in the United States, to accept the findings of the majority of climate scientists has been the misguided conflation by certain activists (almost always on the political Left) of the justifiable need to prevent or mitigate the effects of climate change with specific policy recommendations that are profoundly counterproductive to that purpose and would only increase the everyday suffering of ordinary people without genuinely alleviating deleterious climate change. The policy recommendations of this sort have historically fallen into two categories: (i) Neo-Malthusian, “back to nature” proposals to restrict the use of advanced technologies and return to more primitive modes of living; and (ii) elaborate economic manipulations, such as the creation of artificial markets in “carbon credits”, or the imposition of a carbon tax or a related form of “Pigovian tax” – ostensibly to associate the “negative externalities” of greenhouse-gas emissions with a tangible cost. The Neo-Malthusian “solutions” would, in part deliberately, cause extreme detriments to most people’s quality of life (for those who remain alive), while simultaneously resulting in the use of older, far more environmentally destructive techniques of energy generation, such as massive deforestation or the combustion of animal byproducts. The Neo-Pigovian economic manipulations ignore how human motives and incentives actually work and are far too indirect, contingent on a variety of assumptions that are unlikely ever to hold in practice. At the same time, the artificially complex structures that these economic manipulations inevitably create would obstruct the direct deployment of more straightforward solutions by entangling such potential solutions in an inextricable web of compliance interdependencies.

The solutions to destructive climate change are ultimately technological and infrastructural.  No single device or tactic – and certainly no tax or prohibition – can comprehensively combat a problem of this magnitude and variety of impacts. However, a suite of technologically oriented approaches – pushing forward the deployment and quality of the arsenal of tools available to humankind – could indeed arrest and perhaps reverse the course of deleterious climate change by directly reducing the emissions of greenhouse gases and/or directly alleviating the consequences of increased climate variability.

Because both human circumstances and current as well as potential technologies are extremely diverse, no list of potential solutions to deleterious climate change can ever be exhaustive. Here I attempt the beginnings of such a list, but I invite others to contribute additional technologically oriented solutions as well. There are only two constraints on the kinds of solutions that can feasibly and ethically combat deleterious climate change – but those constraints are of immense importance:

Constraint 1. The solutions may not result in a net detriment to any individual human’s length or material quality of life.

Constraint 2. The solutions may not involve the prohibition of technologies or the restraint of further technological progress.

Constraint 1 implies that any solution to deleterious climate change will need to be a Pareto-efficient move, in that at least one person should benefit, while no person should suffer a detriment (or at least a detriment that has not been satisfactorily compensated for in that person’s judgment). Constraint 2 implies a techno-optimistic and technoprogressive perspective on combating deleterious climate change: we can do it without restrictions or prohibitions, but rather through innovations that will benefit all humans. Some technologies, particularly those associated with the extraction and use of fossil fuels, may gradually be consigned to obsolescence and irrelevance with this approach, but this will be due to their voluntary abandonment once superior, more advanced technological alternatives become widespread and economical to deploy. The more freedom to innovate and active acceleration of technological progress exist, the sooner that stage of fossil-fuel obsolescence could be reached. In the meantime, some damaging events are unfortunately unavoidable (as are many natural catastrophes more generally in our still insufficiently advanced era), but a variety of approaches can be deployed to at least prevent or reduce some damage that would otherwise arise.

If humanity solves the problems of deleterious climate change, it can only be with the mindset that solutions are indeed achievable, and they are achievable without compromising our progress or standards of living. We must be neither defeatists nor reactionaries, but rather should proactively accelerate the development of emerging technologies to meet this challenge by actualizing the tremendous creative potential our minds have to offer.

What follows is the initial list of potential solutions. Long may it grow.

Direct Technological Innovation

  • Continued development of economical solar and wind power that could compete with fossil fuels on the basis of cost alone.
  • Continued development of electric vehicles and increases in their range, as well as deployment of charging stations throughout all inhabited areas to enable recharging to become as easy as refueling a gasoline-powered vehicle.
  • Development of in vitro (lab-grown) meat that is biologically identical to currently available meat but does not require actual animals to die. Eventually this could lead the commercial raising of cattle – which contribute significantly to methane emissions – to decline substantially.
  • Development of vertical farming to increase the amount of arable land indoors – rendering more food production largely unaffected by climate change.
  • Autonomous vehicles offered as services by transportation network companies – reducing the need for direct car ownership in urban areas.
  • Development and spread of pest-resistant, drought-resistant genetically modified crops that require less intensive cultivation techniques and less application of spray pesticides, and which can also flourish in less hospitable climates.
  • Construction of hyperloop transit networks among major cities, allowing rapid transit without the pollution generated by most automobile and air travel. Hyperloop networks would also allow for more rapid evacuation from a disaster area.
  • Construction of next-generation, meltdown-proof nuclear-power reactors, including those that utilize the thorium fuel cycle. It is already possible today for most of a country’s electricity to be provided through nuclear power, if only the fear of nuclear energy could be overcome. However, the best way to overcome the fear of nuclear energy is to deploy new technologies that eliminate the risk of meltdown. In addition to this, technologies should be developed to reprocess nuclear waste and to safely re-purpose dismantled nuclear weapons for civilian energy use.
  • Construction of smart infrastructure systems and devices that enable each building to use available energy with the maximum possible benefit and minimum possible waste, while also providing opportunities for the building to generate its own renewable energy whenever possible.
  • In the longer term, development of technologies to capture atmospheric carbon dioxide and export it via spaceships to the Moon and Mars, where it could be released as part of efforts to generate a greenhouse effect and begin terraforming these worlds.

Disaster Response

  • Fire cameras located at prominent vantage points in any area of high fire risk – perhaps linked to automatic alerts to nearby fire departments and sprinkler systems built into the landscape, which might be auto-activated if a sufficiently large fire is detected in the vicinity.
  • Major increases in recruitment of firefighters, with generous pay and strategic construction of outposts in wilderness areas. Broad, paved roads need to lead to the outposts, allowing for heavy equipment to reach the site of a wildfire easily.
  • Development of firefighting robots to accompany human firefighters. The robots would need to be constructed from fire-resistant materials and have means of transporting themselves over rugged terrain (e.g., tank treads).
  • Design and deployment of automated firefighting drones – large autonomous aircraft that could carry substantial amounts of water and/or fire-retardant sprays.

Disaster Prevention

  • Recruitment of large brush-clearing brigades to travel through heavily forested areas – particularly remote and seldom-accessed ones – and clear dead vegetation as well as other wildfire fuels. This work does not require significant training or expertise and so could offer an easy job opportunity for currently unemployed or underemployed individuals. In the event of shortages of human labor, brush-clearing robots could be designed and deployed. The robots could also have the built-in capability to reprocess dead vegetation into commercially usable goods – such as mulch or wood pellets. Think of encountering your friendly maintenance robot when hiking or running on a trail!
  • Proactive creation of fire breaks in wilderness areas – not “controlled burns” (which are, in practice, difficult to control) but rather controlled cuts of smaller, flammable brush to reduce the probability of fire spreading. Larger trees of historic significance should be spared, but with defensible space created around them.
  • Deployment of surveillance drones in forested areas, to detect behaviors such as vandalism or improper precautions around manmade fires – which are often the causes of large wildfires.
  • Construction of large levees throughout coastal regions – protecting lowland areas from flooding and achieving in the United States what has been achieved in the Netherlands over centuries on a smaller scale. Instead of building a wall at the land border, build many walls along the coasts!
  • Construction of vast desalination facilities along ocean coasts. These facilities would take in ocean water, thereby counteracting the effects of rising water levels, then purify the water and transmit it via a massive pipe network throughout the country, including to drought-prone regions. This would mitigate multiple problems, reducing the excess of water in the oceans while replenishing the deficit of water in inland areas.
  • Creation of countrywide irrigation and water-pipeline networks to spread available water and prevent drought wherever it might arise.

Economic Policies

  • Redesign of home insurance policies and disaster-mitigation/recovery grants to allow homeowners who lost their homes to natural disasters to rebuild in different, safer areas.
  • Development of workplace policies to encourage telecommuting and teleconferencing, including through immersive virtual-reality technologies that allow for plausible simulacra of in-person interaction. The majority of business interactions can be performed virtually, eliminating the need for much business-related commuting and travel.
  • Elimination of local and regional monopoly powers of utility companies in order to allow alternative-energy utilities, such as companies specializing in the installation of solar panels, to compete and offer their services to homeowners independently of traditional utilities.
  • Establishment of consumer agencies (public or private) that review products for durability and encourage the construction of devices that lack “planned obsolescence” but rather can be used for decades with largely similar effect.
  • Establishment of easily accessible community repair shops where old devices and household goods can be taken to be repaired or re-purposed instead of being discarded.
  • Abolition of inflexible zoning regulations and overly prescriptive building codes; replacement with a more flexible system that allows a wide variety of innovative construction techniques, including disaster-resistant and sustainable construction methods, tiny homes, homes created from re-purposed materials, and mixed-use residential/commercial developments (which also reduce the need for vehicular commuting).
  • Abolition of sales taxes on energy-efficient consumer goods.
  • Repeal or non-enactment of any mileage-based taxes for electric or hybrid vehicles, thereby resulting in such vehicles becoming incrementally less expensive to operate.
  • Lifting of all bans and restrictions on genetically modified plants and animals – which are a crucial component in adaptation to climate change and in reducing the carbon footprint of agricultural activities.

Harm Mitigation

  • Increases in planned urban vegetation through parks, rooftop gardens, trees planted alongside streets, and pedestrian / bicyclist “greenways” lined with vegetation. The additional vegetation can absorb carbon dioxide, reducing its concentration in the atmosphere.
  • Construction of additional pedestrian / bicyclist “greenways”, which could help reduce the need for vehicular commutes.
  • Construction of always-operational disaster shelters with abundant stockpiles of aid supplies, in order to prevent the delays in deployment of resources that occur during a disaster. When there is no disaster, the shelters could perform other valuable tasks that generally are not conducive to market solutions, such as litter cleanup in public spaces or even offering inexpensive meeting space to various individuals and organizations. (This could also contribute to the disaster shelters largely becoming self-funding in calm times.)
  • Provision of population-wide free courses on disaster preparation and mitigation. The courses could have significant online components as well as in-person components administered by first-aid and disaster-relief organizations.

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II). Learn more about Mr. Stolyarov here

Fourth Enlightenment Salon – Gennady Stolyarov II, Bill Andrews, Bobby Ridge, and John Murrieta Discuss Transhumanist Outreach and Curing Disabilities

Gennady Stolyarov II
Bill Andrews
Bobby Ridge
John Murrieta


On July 8, 2018, during his Fourth Enlightenment Salon, Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, invited John Murrieta, Bobby Ridge, and Dr. Bill Andrews for an extensive discussion about transhumanist advocacy, science, health, politics, and related subjects. In this first of several installments from the Fourth Enlightenment Salon, the subjects of conversation include the following:

• The U.S. Transhumanist Party’s recent milestone of 1,000 members and what this portends for outreach toward the general public regarding the meaning of transhumanism and the many ways in which emerging technologies help make life better.

• The new channel – Science-Based Species – launched by Bobby and John to spread basic knowledge about transhumanism, key thinkers in the movement, and advances on the horizon.

• How today’s technologies to assist the disabled are already transhumanist in their effects, and how technologies already in development can liberate humans from disability altogether. John Murrieta’s story is one of transhumanism literally saving a life – and one of the most inspiring examples of how transhumanism translates into human well-being now and in the future.

Join the U.S. Transhumanist Party for free, no matter where you reside, by filling out an application form that takes less than a minute. Members will also receive a link to a free compilation of Tips for Advancing a Brighter Future, providing insights from the U.S. Transhumanist Party’s Advisors and Officers on some of what you can do as an individual to improve the world and bring it closer to the kind of future we wish to see.

Contra Robert Shiller on Cryptocurrencies – Article by Adam Alonzi

Adam Alonzi


While calls for caution can be condoned without much guilt, my concern is that critiques like Dr. Shiller’s (which he has since considerably softened) will cause some value-oriented investors to exclude cryptocurrencies and related assets from their portfolios entirely. I will not wax poetic about the myriad forms money has assumed across the ages, because that ground is already well covered by more than one rarely read treatise. It should be said, though it may not need to be, that a community’s preferred medium of exchange is not arbitrary. The immovable stone wheels of Micronesia met the needs of their makers just as digital stores of value like Bitcoin will serve the sprawling financial archipelagos of tomorrow. This role will be facilitated by the ability of blockchains not just to store transactions, but to enforce the governing charter agreed upon by their participants.

Tokens are abstractions, a convenient means of allotting ownership. Bradley Rivetz, a venture capitalist, puts it like this: “everything that can be tokenized will be tokenized the Empire State Building will someday be tokenized, I’ll buy 1% of the Empire State Building, I’ll get every day credited to my wallet 1% of the rents minus expenses, I can borrow against my Empire State Building holding and if I want to sell the Empire State Building I hit a button and I instantly have the money.” Bitcoin and its unmodified copycats do not derive their value from anything tangible, though this is not the case for all crypto projects. Bitcoin’s supporters tout its deflationary design (which isn’t much of an advantage when there is no value to deflate), its modest transaction fees, the fact that it is not treated as a currency by most tax codes (this is changing and liable to continue changing), and the relative anonymity it offers.
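Rivetz’s scenario is easy to make concrete. The sketch below is purely illustrative – the class, token counts, and dollar figures are hypothetical, not any real platform’s API – but it captures the core idea: a ledger of fractional holdings plus a pro-rata distribution of net rents.

```python
# Hypothetical sketch of tokenized ownership with pro-rata income distribution.
# Class name, token counts, and figures are illustrative assumptions.

class TokenizedAsset:
    def __init__(self, name, total_tokens):
        self.name = name
        self.total_tokens = total_tokens
        self.holdings = {}   # owner -> number of tokens held
        self.balances = {}   # owner -> accumulated income credited

    def issue(self, owner, tokens):
        """Record a fractional stake on the ledger."""
        self.holdings[owner] = self.holdings.get(owner, 0) + tokens

    def distribute(self, rents, expenses):
        """Credit each holder their share of (rents - expenses)."""
        net = rents - expenses
        for owner, tokens in self.holdings.items():
            share = net * tokens / self.total_tokens
            self.balances[owner] = self.balances.get(owner, 0.0) + share

# A holder of 1% of the tokens receives 1% of the net income.
building = TokenizedAsset("Empire State Building", total_tokens=100)
building.issue("alice", 1)  # a 1% stake
building.distribute(rents=1_000_000, expenses=400_000)
print(building.balances["alice"])  # 1% of 600,000 -> 6000.0
```

On a real blockchain the `distribute` step would be enforced by the contract itself rather than trusted bookkeeping, which is precisely the “governing charter” point made above.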

The fact that Bitcoin is still considered an asset in most jurisdictions is a strength. Since Bitcoin is the de facto intermediary on most exchanges (most pairs are expressed in terms of BTC or a major fiat currency, many solely in BTC), one can buy and sell other tokens freely without worrying about capital gains taxes, which turn what should be wholly pleasurable into something akin to an ice cream sundae followed by a root canal. This applies to sales and corporate income taxes as well. A company like Walmart, despite its gross income, relies on a slender profit margin to appease its shareholders. While I’m not asking you to weep for the Waltons, I am asking you to think about the incentives for a company to begin experimenting with its own tax-free tokens as a means of improving customer spending power and building brand loyalty.

How many coins will be needed, and how many niches they will be summoned to fill, remains unknown. In his lecture on real estate, Dr. Shiller mentions the Peruvian economist Hernando de Soto’s observation about the lack of accounting for most of the land in the world. Needless to say, for these areas to advance economically, or in any other way for that matter, it is important to establish who owns what. Drafting deeds, transferring ownership of properties or other goods, and managing the laws of districts where local authorities are unreliable or otherwise impotent are services best provided by an inviolable ledger. In the absence of a central body, this responsibility will be assumed by blockchains. Projects like BitNation are bringing the idea of decentralized governance to the masses; efforts like Octaneum are beginning to integrate blockchain technology with multi-trillion-dollar commodities markets.

As more than one author has contended, information is arguably the most precious resource of the twenty-first century. It is hardly scarce, but analysis is vital to making sound decisions. Augur and Gnosis provide decentralized prediction markets. The latter, as Kristin Houser describes it, is a platform used “to create a prediction market for any event, such as the Super Bowl or an art auction.” Philip Tetlock’s book on superforecasting covers the key advantages of crowdsourcing economic and geopolitical forecasting, namely accuracy and cost-effectiveness. Blockchains will not only generate data, but also assist in making sense of it. While it is just a historical aside, it is worth remembering that money, as Tymoigne and Wray (2006) note, was originally devised as a means of recording debt. Hazel sticks with notches preceded the first coins by hundreds of years. Money began as a unit of accounting, not a store of value.

MelonPort and Iconomi both allow anyone to start their own investment funds. That they are “just” software is the beauty of it: these programs can continue to be improved upon indefinitely, and if the old team loses its vim, the project can easily be forked. Where is crypto right now, and why does it matter? There is a tendency for academics (and ordinary people) to think of things in the real world as static objects existing in some kind of Platonic heaven. This is a monumental mistake when dealing with an adaptive system, or in this case, a series of immature, interlocking, and rapidly evolving ecosystems. We have seen the first bloom – some pruning too – and as clever people find new uses for the underlying technology, particularly in IoT and other emerging fields, we will see another. The crypto bubble has come and gone, but the tsunami, replete with mature products with explicit functions, is just starting to take shape.

In the long run, Warren Buffett, Shiller, and the rest will likely be right about Bitcoin itself, which has far fewer features than more recent arrivals. Its persisting relevance comes from brand recognition and the fact that most of the crypto infrastructure was built with it in mind. As the first mover, it will remain the reserve currency of the crypto world. It is nowhere near any sort of hard cap: the total amount invested in crypto is still minuscule compared to older markets. Newcomers, unaware or wary of even well-established projects like Ethereum and Litecoin, will at first invest in what they recognize. Given that the barriers to entry (access to an Internet connection and a halfway-decent computer or phone) are set to continue diminishing, including in countries where the fiat currency is unstable, demand should only be expected to climb.

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.

Review of Frank Pasquale’s “A Rule of Persons, Not Machines: The Limits of Legal Automation” – Article by Adam Alonzi

Adam Alonzi


From the beginning of his new paper, “A Rule of Persons, Not Machines: The Limits of Legal Automation,” Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information, contends that software, given its brittleness, is ill-suited to deal with the complexities of taking a case through court and establishing a verdict. As he understands it, an AI cannot deviate far from the rules laid down by its creator. This assumption, which is not quite right even at the present time, only slightly tinges an otherwise erudite, sincere, and balanced treatment of the topic. He does not show much faith in the use of past cases to create datasets for the next generation of paralegals, automated legal services, and, in the more distant future, lawyers and jurists.

Lawrence Zelenak has noted that when taxes were filed entirely on paper, provisions were limited so as to avoid imposing unreasonably irksome nuances on the average person. Tax-return software has eliminated this “complexity constraint.” He goes on to state that without it, the laws, and the software that interprets them, become akin to a “black box” for those who must abide by them. William Gale has said taxes could be easily computed for “non-itemizers”: the government could use information it already has to present a “bill” to this class of taxpayers, saving time and money for all parties involved. However, simplification does not always align with everyone’s interests. TurboTax, whose business is built entirely on helping ordinary people navigate the labyrinth that is the American federal income tax, saw such measures as a threat to its business model and put together a grassroots campaign to fight them. More than just another example of a business protecting its interests, this is an ominous foreshadowing of an escalation scenario that will play out in many areas if and when legal AI becomes sufficiently advanced.

Pasquale writes: “Technologists cannot assume that computational solutions to one problem will not affect the scope and nature of that problem. Instead, as technology enters fields, problems change, as various parties seek to either entrench or disrupt aspects of the present situation for their own advantage.”

What he is referring to here, in everything but name, is an arms race. The vastly superior computational powers of robot lawyers may make the already perverse incentive to write ever more Byzantine rules even more attractive to bureaucracies and lawyers. The concern is that the clauses and dependencies hidden within contracts will quickly multiply, making them far too detailed even for professionals to make sense of in a reasonable amount of time. Because this sort of software may become a necessary accoutrement in most or all legal matters, the demand for it, or for professionals with access to it, will expand greatly at the expense of those who are unwilling or unable to adopt it. This, though Pasquale only hints at it, may lead to greater imbalances in socioeconomic power. On the other hand, he does not consider the possibility of bottom-up open-source (or state-led) efforts to create synthetic public defenders. While this may seem idealistic, it is fairly clear that the open-source model can compete with and, in some areas, outperform proprietary competitors.

It is not unlikely that, within subdomains of law, an array of arms races can and will arise between synthetic intelligences. If a lawyer knows its client is guilty, should it squeal? This will change the way jurisprudence works in many countries, but it would seem unwise to program any robot to knowingly lie about whether a crime, particularly a serious one, has been committed – including by omission. If it is fighting against a punishment it deems overly harsh for a given offense – trespassing to get a closer look at a rabid raccoon, say, or unintentional jaywalking – should it maintain its client’s innocence as a means to an end? A moral consequentialist, seeing that no harm was done (or, in some instances, could possibly have been done), may persist in pleading innocent. A synthetic lawyer may be more pragmatic than deontological, but it is not entirely correct, and certainly shortsighted, to (mis)characterize AI as only capable of blindly following a set of instructions, like a Fortran program made to compute the nth member of the Fibonacci sequence.

Human courts are rife with biases: judges give more lenient sentences after taking a lunch break (65% more likely to grant parole – nothing to sneeze at), attractive defendants are viewed more favorably by unwashed juries and trained jurists alike, and prejudices of all kinds against various “out” groups can tip the scales toward a guilty verdict or a harsher sentence. Why, then, would someone have an aversion to the introduction of AI into a system that is clearly ruled, in part, by the quirks of human psychology?

DoNotPay is an app that helps drivers fight parking tickets and allows drivers with legitimate medical emergencies to gain exemptions. So, as Pasquale says, not only will traffic management be automated, but so will appeals. However, as he cautions, a flesh-and-blood lawyer takes responsibility for bad advice. DoNotPay not only fails to take responsibility, but “holds its client responsible for when its proprietor is harmed by the interaction.” Still, there is little reason to think machines would do a worse job of adhering to privacy guidelines than human beings unless, as in the example of a machine ratting on its client, some overriding principle compels them to divulge information to protect others from harm – say, if a client’s diagnosis makes him a danger in his personal or professional life. Is the client responsible for the mistakes of the robot it has hired? Should the blame not fall upon the firm that provided the service?

Making a blockchain that could handle the demands of processing purchases and sales – one that takes into account all the relevant variables needed to make expert judgments on a matter – is no small task. As the infamous disagreement over the meaning of the word “chicken” in Frigaliment Importing Co. v. B.N.S. International Sales Corp. illustrates, the definition of what anything is can be a bit puzzling. The need to maintain a decent reputation in order to maintain sales is a strong incentive against knowingly cheating customers, but although cheating tends to be the exception for this reason, it is still necessary to protect against it. As one official on the Commodity Futures Trading Commission put it, “where a smart contract’s conditions depend upon real-world data (e.g., the price of a commodity future at a given time), agreed-upon outside systems, called oracles, can be developed to monitor and verify prices, performance, or other real-world events.”
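The oracle pattern the CFTC official describes can be sketched in a few lines. Everything below – the contract fields, prices, and payout rule – is a hypothetical illustration of oracle-gated settlement, not the API of any real smart-contract platform:

```python
# Minimal sketch of an oracle-gated settlement rule (hypothetical names/values).
# The contract pays the buyer if the oracle-reported price meets the agreed
# condition; otherwise the notional stays with the seller.

def settle(contract, oracle_price):
    """Decide the payout based on a price reported by an agreed outside oracle."""
    if oracle_price >= contract["strike"]:
        return {"pay_to": contract["buyer"], "amount": contract["notional"]}
    return {"pay_to": contract["seller"], "amount": contract["notional"]}

deal = {"buyer": "A", "seller": "B", "strike": 50.0, "notional": 100}
print(settle(deal, oracle_price=55.0))  # pays buyer A
print(settle(deal, oracle_price=45.0))  # pays seller B
```

The hard problems in practice are exactly the ones noted above: agreeing on which oracle to trust and on what the contract’s terms (its “chicken”) actually mean.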

Pasquale cites the SEC’s decision to force providers of asset-backed securities to file “downloadable source code in Python.” AmeriCredit responded by saying it “should not be forced to predict and therefore program every possible slight iteration of all waterfall payments” because its business is “automobile loans, not software development.” AmeriCredit does not seem to be familiar with machine learning. There is a case for making all financial transactions and agreements explicit on an immutable platform like a blockchain. There is also a case for making all such code open source, ready to be scrutinized by those with the talents to do so or, in the near future, by those with access to software that can quickly turn it into plain English, Spanish, Mandarin, Bantu, Etruscan, etc.

During the fallout of the 2008 crisis, some homeowners noticed that the entities on their foreclosure paperwork did not match the paperwork they received when their mortgages were sold to a trust. According to Dayen (2010), many banks did not fill out the paperwork at all. This seems to be a rather forceful argument in favor of incorporating synthetic agents into law practices. Like many futurists, Pasquale foresees an increase in “complementary automation.” Human-engine chess teams can still trounce the best AI out there – a commonly cited example of how two (very different) heads are better than one. Yet going to a lawyer is not like visiting a tailor. People, including fairly delusional ones, know whether their clothes fit, but they do not know whether they have received expert counsel – although the outcome of the case might give them a hint.

Pasquale concludes his paper by asserting that “the rule of law entails a system of social relationships and legitimate governance, not simply the transfer and evaluation of information about behavior.” This is closely related to the doubts expressed at the beginning of the piece about the usefulness of datasets in training legal AI. He then states that those in the legal profession must handle “intractable conflicts of values that repeatedly require thoughtful discretion and negotiation.” This appears to be the legal equivalent of epistemological mysterianism, and it stands on still shakier ground than its analogue, because it is clear that laws are, or should be, rooted in some set of criteria agreed upon by the members of a given jurisdiction. Shouldn’t the rulings of lawmakers and the values that inform them be at least partially quantifiable? There are efforts, like EthicsNet, which are trying to prepare datasets and criteria to feed machines in the future (because they will certainly have to be fed by someone!). There is no doubt that the human touch in law will not be supplanted soon; the question is whether our intuition should be exalted as a guarantee of fairness or regarded as a hindrance to moving beyond a legal system bogged down by the baggage of human foibles.

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.

Beginners’ Explanation of Transhumanism – Presentation by Bobby Ridge and Gennady Stolyarov II

Bobby Ridge
Gennady Stolyarov II


Bobby Ridge, Secretary-Treasurer of the U.S. Transhumanist Party, and Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, provide a broad “big-picture” overview of transhumanism and major ongoing and future developments in emerging technologies that present the potential to revolutionize the human condition and resolve the age-old perils and limitations that have plagued humankind.

This is a beginners’ overview of transhumanism – which means that it is for everyone, including those who are new to transhumanism and the life-extension movement, as well as those who have been involved in it for many years – since, when it comes to dramatically expanding human longevity and potential, we are all beginners at the beginning of what could be our species’ next great era.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside.

See Mr. Stolyarov’s presentation, “The U.S. Transhumanist Party: Pursuing a Peaceful Political Revolution for Longevity”.

In the background of some of the video segments is a painting now owned by Mr. Stolyarov, from “The Singularity is Here” series by artist Leah Montalto.

Second Enlightenment Salon – G. Stolyarov II, Bill Andrews, Bobby Ridge, and Scott Jurgens Discuss the Convergence of Technological Advances

Gennady Stolyarov II

Bill Andrews

Bobby Ridge

Scott Jurgens


U.S. Transhumanist Party Chairman Gennady Stolyarov II invited Dr. Bill Andrews (the U.S. Transhumanist Party’s Biotechnology Advisor), Bobby Ridge (the U.S. Transhumanist Party’s Secretary-Treasurer), and Scott Jurgens to his Second Enlightenment Salon, where they shared their thoughts on emerging life-extension research, advances in prosthetics and orthotics, philosophy of science, brain-computer interfaces, and how technologies from a variety of fields are converging to bring about a paradigm shift in the human condition – hopefully within the coming decades.

U.S. Transhumanist Party Discussion on Prosthetics, Neuroscience, and the Future of Human Potential

The New Renaissance Hat

G. Stolyarov II

Bobby Ridge

Scott Jurgens

September 18, 2017


References

– Hugh Herr – “The new bionics that let us run, climb, and dance” – TED – March 2014
– LimbForge – Enable Community Foundation
– Autodesk Fusion 360
– Thingiverse
– “Metal Gear Solid 5 Inspires an Amazing Prosthetic Arm” – Kendall Ashley – Nerdist – May 23, 2016

Learn more about the U.S. Transhumanist Party here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form here.

Become a Foreign Ambassador for the U.S. Transhumanist Party. Apply here.

Why Robots Won’t Cause Mass Unemployment – Article by Jonathan Newman

The New Renaissance Hat
Jonathan Newman
August 5, 2017

I made a small note in a previous article about how we shouldn’t worry about technology that displaces human workers:

The lamenters don’t seem to understand that increased productivity in one industry frees up resources and laborers for other industries, and, since increased productivity means increased real wages, demand for goods and services will increase as well. They seem to have a nonsensical apocalyptic view of a fully automated future with piles and piles of valuable goods everywhere, but nobody can enjoy them because nobody has a job. I invite the worriers to check out simple supply and demand analysis and Say’s Law.

Say’s Law of markets is a particularly potent antidote to worries about automation, displaced workers, and the so-called “economic singularity.” Jean-Baptiste Say explained how over-production is never a problem for a market economy. This is because all acts of production result in the producer having an increased ability to purchase other goods. In other words, supplying goods on the market allows you to demand goods on the market.

Say’s Law, Rightly Understood

J.B. Say’s Law is often inappropriately summarized as “supply creates its own demand,” a product of Keynes having “badly vulgarized and distorted the law.”

Professor Bylund has recently set the record straight regarding the various summaries and interpretations of Say’s Law.

Bylund lists the proper definitions:

Say’s Law:

  • Production precedes consumption.
  • Demand is constituted by supply.
  • One’s demand for products in the market is limited by one’s supply.
  • Production is undertaken to facilitate consumption.
  • Your supply to satisfy the wants of others makes up your demand for others’ production.
  • There can be no general over-production (glut) in the market.

NOT Say’s Law:

  • Production creates its own demand.
  • Aggregate supply is (always) equal to aggregate demand.
  • The economy is always at full employment.
  • Production cannot exceed consumption for any good.

Say’s Law should allay the fears of robots taking everybody’s jobs. Producers will only employ more automated (read: capital-intensive) production techniques if such an arrangement is more productive and profitable than a more labor-intensive one. As revealed by Say’s Law, this means that the more productive producers have an increased ability to purchase more goods on the market. There will never be “piles and piles of valuable goods” lying around with no one to enjoy them.

Will All the Income Slide to the Top?

The robophobic are also worried about income inequality — all the greedy capitalists will take advantage of the increased productivity of the automated techniques and fire all of their employees. Unemployment will rise as we run out of jobs for humans to do, they say.

This fear is unreasonable for three reasons. First of all, how could these greedy capitalists make all their money without a large mass of consumers to purchase their products? If the majority of people are without incomes because of automation, then the majority of people won’t be able to help line the pockets of the greedy capitalists.

Second, there will always be jobs because there will always be scarcity. Human wants are unlimited, diverse, and ever-changing, yet the resources we need to satisfy our desires are limited. The production of any good requires labor and entrepreneurship, so humans will never become unnecessary.

Finally, Say’s Law implies that the profitability of producing all other goods will increase after a technological advancement in the production of one good. Real wages can increase because the greedy robot-using capitalists now have increased demands for all other goods. I hope the following scenario makes this clear.

The Case of the Robot Fairy

This simple scenario shows why the increased productivity of a new, more capital-intensive technique makes everybody better off in the end.

Consider an island of three people: Joe, Mark, and Patrick. The three of them produce coconuts and berries. They prefer a varied diet, but they have their own comparative advantages and preferences over the two goods.

Patrick prefers a stable supply of coconuts and berries every week, and so he worked out a deal with Joe such that Joe would pay him a certain wage in coconuts and berries every week in exchange for Patrick helping Joe gather coconuts. If they have a productive week, Joe gets to keep the extra coconuts and perhaps trade some of the extra coconuts for berries with Mark. If they have a less than productive week, then Patrick still receives his certain wage and Joe has to suffer.

On average, Joe and Patrick produce 50 coconuts/week. In exchange for his labor, Patrick gets 10 coconuts and 5 quarts of berries every week from Joe.

Mark produces the berries on his own. He produces about 30 quarts of berries every week. Joe and Mark usually trade 20 coconuts for 15 quarts of berries. Joe needs some of those berries to pay Patrick, but some are for himself because he also likes to consume berries.

In sum, and for an average week, Joe and Patrick produce 50 coconuts and Mark produces 30 quarts of berries. Joe ends up with 20 coconuts and 10 quarts of berries, Patrick ends up with 10 coconuts and 5 quarts of berries, and Mark ends up with 20 coconuts and 15 quarts of berries.

         Production            Trade              Consumption
Joe      50 Coconuts (C)       Give 20C for 15B   20C + 10B
Patrick  n/a                   n/a                10C + 5B (wage)
Mark     30 qts. Berries (B)   Give 15B for 20C   20C + 15B
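The baseline allocation can be verified with a few lines of arithmetic, using only the figures given above:

```python
# Baseline island economy, using the article's numbers.
coconuts_produced = 50   # Joe and Patrick together
berries_produced = 30    # Mark alone

# Patrick's wage, paid by Joe: 10 coconuts and 5 quarts of berries.
patrick = {"C": 10, "B": 5}

# Joe trades 20 coconuts to Mark for 15 quarts of berries,
# then pays Patrick his wage out of what remains.
joe = {"C": coconuts_produced - 20 - patrick["C"],
       "B": 15 - patrick["B"]}
mark = {"C": 20, "B": berries_produced - 15}

print(joe)      # {'C': 20, 'B': 10}
print(patrick)  # {'C': 10, 'B': 5}
print(mark)     # {'C': 20, 'B': 15}
```

Every coconut and quart of berries produced is consumed by someone, which is the bookkeeping behind the claim that supply constitutes demand.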

The Robot Fairy Visits

One night, the robot fairy visits the island and endows Joe with a Patrick 9000, a robot that totally displaces Patrick from his job, plus some. With the robot, Joe can now produce 100 coconuts per week without the human Patrick.

What is Patrick to do? Well, he considers two options: (1) now that the island has plenty of coconuts, he could go work for Mark and pick berries under an arrangement similar to the one he had with Joe; or (2) he could head to the beach and start catching fish, hoping that Joe and Mark will trade with him.

While these options weren’t Patrick’s top choices before the robot fairy visited, now they are great options precisely because Joe’s productivity has increased. Joe’s increased productivity doesn’t just mean that he is richer in terms of coconuts, but his demands for berries and new goods like fish increase as well (Say’s Law), meaning the profitability of producing all other goods that Joe likes also increases!

Option 1

If Patrick chooses option 1 and goes to work for Mark, then both berry and coconut production totals will increase. Assuming berry production doesn’t increase as much as coconut production, the price of a coconut in terms of berries will decrease (Joe’s marginal utility for coconuts will also be very low), meaning Mark can purchase many more coconuts than before.

Suppose Patrick adds 15 quarts of berries per week to Mark’s production. Joe and Mark could agree to trade 40 coconuts for 20 quarts of berries, so Joe ends up with 60 coconuts and 20 quarts of berries. Mark can pay Patrick up to 19 coconuts and 9 quarts of berries and still be better off compared to before Joe got his Patrick 9000 (though Patrick’s marginal productivity would warrant something like 12 coconuts and 9 quarts of berries or 18 coconuts and 6 quarts of berries or some combination between those — no matter what, everybody is better off).

            Production             Trade                  Consumption
Joe         100C                   Gives 40C for 20B      60C + 20B
Patrick     (labor for Mark)       n/a                    16C + 7B (wage)
Mark        45 qts. Berries (B)    Gives 20B for 40C      24C + 18B
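The Option 1 arithmetic, including the fall in the coconut price and the claim that everybody beats their pre-robot bundle, can be verified with a quick sketch (mine, not the author's; the trade of 40C for 20B and the 16C + 7B wage are taken from the scenario):

```python
# Option 1: Joe (with the Patrick 9000) produces 100C; Mark and Patrick together produce 45B.
# Joe trades 40C to Mark for 20B; Mark pays Patrick a wage of 16C + 7B.
joe     = {"C": 100 - 40, "B": 20}            # 60C + 20B
patrick = {"C": 16,       "B": 7}             # wage from Mark
mark    = {"C": 40 - 16,  "B": 45 - 20 - 7}   # 24C + 18B

# The price of a coconut in berries falls: 15B/20C = 0.75 before, 20B/40C = 0.50 after.
price_before, price_after = 15 / 20, 20 / 40
assert price_after < price_before

# Everyone's new bundle weakly dominates the pre-robot bundle, good by good.
old = {"Joe": {"C": 20, "B": 10}, "Patrick": {"C": 10, "B": 5}, "Mark": {"C": 20, "B": 15}}
new = {"Joe": joe, "Patrick": patrick, "Mark": mark}
for name in old:
    assert all(new[name][g] >= old[name][g] for g in ("C", "B")), name

print("Option 1: coconuts are cheaper in berries, and everybody is better off.")
```

The same script also makes Mark's wage ceiling transparent: paying anything up to 19C + 9B still leaves him above his old bundle of 20C + 15B.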

Option 2

If Mark decides to reject Patrick’s offer to work for him, then Patrick can choose option 2, catching fish. It involves more uncertainty than what Patrick is used to, but he anticipates that the extra food will be worth it.

Suppose that Patrick can catch just 5 fish per week. Joe, who is practically swimming in coconuts, pays Patrick 20 coconuts for 1 fish. Mark, who is excited about more variety in his diet and even prefers fish to his own berries, pays Patrick 10 quarts of berries for 2 fish. Joe and Mark also trade some coconuts and berries with each other.

In the end, Patrick gets 20 coconuts, 10 quarts of berries, and 2 fish per week. Joe gets 50 coconuts, 15 quarts of berries, and 1 fish per week. Mark gets 30 coconuts, 5 quarts of berries, and 2 fish per week. Everybody prefers their new diet.

            Production             Trade                     Consumption
Joe         100C                   Gives 50C for 15B + 1F    50C + 15B + 1F
Patrick     5 Fish (F)             Gives 3F for 20C + 10B    20C + 10B + 2F
Mark        30 qts. Berries (B)    Gives 25B for 30C + 2F    30C + 5B + 2F
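The Option 2 ledger balances as well, which a short sketch (mine, with the trades as stated in the text: Joe pays 20C for 1 fish, Mark pays 10B for 2 fish, and Joe and Mark trade 30C for 15B between themselves) confirms:

```python
# Option 2: Joe produces 100C, Mark 30B, Patrick 5 fish (F).
# Trades: Joe gives 20C to Patrick for 1F and 30C to Mark for 15B;
#         Mark gives 10B to Patrick for 2F.
joe     = {"C": 100 - 20 - 30, "B": 15,           "F": 1}
patrick = {"C": 20,            "B": 10,           "F": 5 - 1 - 2}
mark    = {"C": 30,            "B": 30 - 15 - 10, "F": 2}

# All production is consumed, including the brand-new good.
for good, total in {"C": 100, "B": 30, "F": 5}.items():
    assert joe[good] + patrick[good] + mark[good] == total, good

print("Option 2 balances: 100 coconuts, 30 qts of berries, and 5 fish all consumed.")
```

Note that Patrick trades away 3 of his 5 fish in total (1 to Joe, 2 to Mark) and keeps 2 for himself, matching the consumption figures in the text.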

Conclusion

The new technology forced Patrick to find a new way to sustain himself. These new jobs were at best second-best to working for Joe in the pre-robot days; otherwise Patrick would have pursued them earlier. But just because they were suboptimal pre-robot does not mean that they are suboptimal post-robot. The island's economy was dramatically changed by the robot, such that total production (and therefore consumption) could increase for everybody. Joe's increased productivity translated into better deals for everybody.

Of course, one extremely unrealistic aspect of this robot fairy story is the robot fairy. Robot fairies do not exist, unfortunately. New technologies must be wrangled into existence by human labor and natural resources, with the help of capital goods, which also must be produced using labor and natural resources. Also, new machines have to be maintained, replaced, refueled, and rejiggered, all of which require human labor. Thus, we have made this scenario difficult for ourselves by assuming away all of the labor that would be required to produce and maintain the Patrick 9000. Even so, we see that the whole economy, including the human Patrick, benefits as a result of the new robot.

This scenario highlights three important points:

(1) Production must precede consumption, even for goods you don’t produce (Say’s Law). For Mark to consume coconuts or fish, he has to supply berries on the market. For Joe to consume berries or fish, he has to supply coconuts on the market. Patrick produced fish so that he could also enjoy coconuts and berries.

(2) Isolation wasn’t an option for Patrick. Because of the Law of Association (a topic not discussed here, but important nonetheless), there is always a way for Patrick to participate in a division of labor and benefit as a result, even after being displaced by the robot.

(3) Jobs will never run out because human wants will never run out. Even if our three island inhabitants had all of the coconuts and berries they could eat before the robot fairy visited, Patrick was able to supply additional want satisfaction with a brand new good, the fish. In the real world, new technologies often pave the way for brand new, totally unrelated goods to emerge and for whole economies to flourish. Hans Rosling famously made the case that the advent of the washing machine allowed women and their families to emerge from poverty:

And what’s the magic with them? My mother explained the magic with this machine the very, very first day. She said, “Now Hans, we have loaded the laundry. The machine will make the work. And now we can go to the library.” Because this is the magic: you load the laundry, and what do you get out of the machine? You get books out of the machines, children’s books. And mother got time to read for me. She loved this. I got the “ABC’s” — this is where I started my career as a professor, when my mother had time to read for me. And she also got books for herself. She managed to study English and learn that as a foreign language. And she read so many novels, so many different novels here. And we really, we really loved this machine.

And what we said, my mother and me, “Thank you industrialization. Thank you steel mill. Thank you power station. And thank you chemical processing industry that gave us time to read books.”

Similarly, the Patrick 9000, a coconut-producing robot, made fish production profitable. Indeed, when we look at the industrial revolution and the computer revolution, we do not just see an increase in the production of existing goods. We see existing goods increasing in quantity and quality; we see brand new consumption goods and totally new industries emerging, providing huge opportunities for employment and future advances in everybody’s standard of living.

Jonathan Newman is Assistant Professor of Economics and Finance at Bryan College. He earned his PhD at Auburn University and is a Mises Institute Fellow. He can be contacted here.

Panel – Artificial Intelligence & Robots: Economy of the Future or End of Free Markets? – Michael Shermer, Edward Hudgins, Zoltan Istvan, Gennady Stolyarov II, Eric Shuss



July 28, 2017


Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, participated in the panel discussion at FreedomFest in Las Vegas on July 21, 2017, entitled “AI & Robots: Economy of the Future or End of Free Markets?” The panelists presented a set of realistic, balanced analyses on the impact of artificial intelligence and automation.

***

For this event there was an outstanding speaker lineup, with moderator Michael Shermer, followed by Edward Hudgins, Peter Voss, Zoltan Istvan, Gennady Stolyarov II, and Eric Shuss.

***

Mr. Stolyarov's remarks focused on dispelling AI-oriented doomsaying and arguing that the capitalist economy is likely to survive for at least the next several decades, since narrow AI cannot automate away jobs that require creative human judgment.

***

The video was recorded by filmmaker Ford Fischer and is reproduced with his permission.

Visit Ford Fischer’s News2Share channel here.

Visit the U.S. Transhumanist Party website here.

Join the U.S. Transhumanist Party for free by filling out our membership application form here.

Visit the U.S. Transhumanist Party Facebook page here.

Visit the U.S. Transhumanist Party Twitter page here.