
Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence – Essay by Gennady Stolyarov II in Issue 2 of the INSAM Journal


******************************

Note from Mr. Stolyarov: For those interested in my thoughts on the connections among music, technology, algorithms, artificial intelligence, transhumanism, and the philosophical motivations behind my own compositions, I have had a peer-reviewed paper, “Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence”, published in Issue 2 of the INSAM Journal of Contemporary Music, Art, and Technology. This is a rigorous academic publication that is also freely available and shareable via a Creative Commons Attribution-ShareAlike license – just as academic works ought to be – so I was honored by the opportunity to contribute my writing. My essay features discussions of Plato and Aristotle, Kirnberger’s and Mozart’s musical dice games, the AI-generated compositions of Ray Kurzweil and David Cope, and the recently completed “Unfinished” Symphony of Franz Schubert, whose second half was made possible by an AI/human collaboration between Huawei and composer Lucas Cantor. Even Conlon Nancarrow, John Cage, Iannis Xenakis, and Karlheinz Stockhausen make appearances in this paper. Look in the bibliography for YouTube and downloadable MP3 links to all of my compositions that I discuss, as this paper is intended to be a multimedia experience.

Music, technology, and transhumanism – all in close proximity in the same paper and pointing the way toward the vast proliferation of creative possibilities in the future as the distance between the creator’s conception of a musical idea and its implementation becomes ever shorter.

You can find my paper on pages 81-99 of Issue 2.

Read “Empowering Human Musical Creation through Machines, Algorithms, and Artificial Intelligence” here.

Read the full Issue 2 of the INSAM Journal here.

Abstract: “In this paper, I describe the development of my personal research on music that transcends the limitations of human ability. I begin with an exploration of my early thoughts regarding the meaning behind the creation of a musical composition according to the creator’s intentions and how to philosophically conceptualize the creation of such music if one rejects the existence of abstract Platonic Forms. I then explore the transformation of my own creative process through the introduction of software capable of playing back music in exact accord with the inputs provided to it, while enabling the creation of music that remains intriguing to the human ear even though the performance of it may sometimes be beyond the ability of humans. Subsequently, I describe my forays into music generated by earlier algorithmic systems such as the Musikalisches Würfelspiel and narrow artificial-intelligence programs such as WolframTones and my development of variations upon artificially generated themes in essential collaboration with the systems that created them. I also discuss some of the high-profile, advanced examples of AI-human collaboration in musical creation during the contemporary era and raise possibilities for the continued role of humans in drawing out and integrating the best artificially generated musical ideas. I express the hope that the continued advancement of musical software, algorithms, and AI will amplify human creativity by narrowing and ultimately eliminating the gap between the creator’s conception of a musical idea and its practical implementation.”
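To give a concrete sense of the kind of algorithmic system mentioned in the abstract, below is a minimal, illustrative sketch of the Musikalisches Würfelspiel principle: two dice select one pre-composed measure for each bar of a short piece, so that very simple random rules yield an enormous number of possible compositions. The measure labels and lookup table here are hypothetical placeholders, not Kirnberger’s or Mozart’s actual tables, and the sketch is not drawn from the paper itself.

```python
import random

# Minimal sketch of a Musikalisches Wuerfelspiel ("musical dice game").
# The measure labels below are hypothetical placeholders, not Mozart's actual tables:
# a real game maps each (bar position, dice total) pair to a specific pre-composed measure.
NUM_BARS = 16               # a short waltz section
DICE_TOTALS = range(2, 13)  # possible sums of two six-sided dice

# Hypothetical lookup table: for each bar, one candidate measure per dice total.
measure_table = {
    bar: {total: f"measure_{bar:02d}_{total:02d}" for total in DICE_TOTALS}
    for bar in range(1, NUM_BARS + 1)
}

def roll_two_dice() -> int:
    """Sum of two fair six-sided dice, as in the 18th-century game."""
    return random.randint(1, 6) + random.randint(1, 6)

def compose_waltz() -> list:
    """Select one pre-composed measure per bar according to the dice rolls."""
    return [measure_table[bar][roll_two_dice()] for bar in range(1, NUM_BARS + 1)]

if __name__ == "__main__":
    print(compose_waltz())
```

Even this toy version admits 11 choices per bar across 16 bars, which is why such games can generate astronomically many distinct, stylistically coherent pieces from a small set of hand-composed fragments.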

Cyborg and Transhumanist Forum at the Nevada State Legislature – May 15, 2019


Gennady Stolyarov II
Anastasia Synn
R. Nicholas Starr


Watch the video containing 73 minutes of excerpts from the Cyborg and Transhumanist Forum, held on May 15, 2019, at the Nevada State Legislature Building.

The Cyborg and Transhumanist Forum at the Nevada Legislature on May 15, 2019, marked a milestone for the U.S. Transhumanist Party and the Nevada Transhumanist Party. This was the first time that an official transhumanist event was held within the halls of a State Legislature, in one of the busiest areas of the building, within sight of the rooms where legislative committees met. The presenters were approached by dozens of individuals – a few legislators and many lobbyists and staff members. The reaction was predominantly positive or at least curious; there was no hostility and only mild disagreement from a few individuals. Generally, the outlook within the Legislative Building seems to be in favor of individual autonomy to pursue truly voluntary microchip implants. The testimony of Anastasia Synn at the Senate Judiciary Committee on April 26, 2019, in opposition to Assembly Bill 226, is one of the most memorable episodes of the 2019 Legislative Session for many who heard it. It has certainly affected the outcome for Assembly Bill 226, which was subsequently further amended to restore the bill’s original scope and apply the prohibition only to coercive microchip implants, while specifically exempting microchip implants voluntarily received by an individual. The scope of the prohibition was also narrowed by removing the reference to “any other person” and applying the prohibition to an enumerated list of entities who may not require others to be microchipped: state officers and employees, employers as a condition of employment, and persons in the business of insurance or bail. These changes alleviated the vast majority of the concerns within the transhumanist and cyborg communities about Assembly Bill 226.

From left to right: Gennady Stolyarov II, Anastasia Synn, and Ryan Starr (R. Nicholas Starr)

This Cyborg and Transhumanist Forum comes at the beginning of an era of transhumanist political engagement with policymakers and those who advise them. It was widely accepted by the visitors to the demonstration tables that technological advances are accelerating, and that policy decisions regarding technology should only be made with adequate knowledge about the technology itself – working on the basis of facts and not fears or misconceptions that arise from popular culture and dystopian fiction. Ryan Starr shared his expertise on the workings and limitations of both NFC/RFID microchips and GPS technology and explained that cell phones are already far more trackable than microchips ever could be (based on their technical specifications and how those specifications could potentially be improved in the future). U.S. Transhumanist Party Chairman Gennady Stolyarov II introduced visitors to the world of transhumanist literature by bringing books for display – including writings by Aubrey de Grey, Bill Andrews, Ray Kurzweil, Jose Cordeiro, Ben Goertzel, Phil Bowermaster, and Mr. Stolyarov’s own book “Death is Wrong” in five languages. There seems to be more sympathy for transhumanism within contemporary political circles than might appear at first glance; it is often transhumanists themselves who overestimate the negativity of the reaction they expect to receive. Nobody picketed the event or even called the presenters names; transhumanist ideas, expressed in a civil and engaging way – with an emphasis on practical applications that are here today or due to arrive in the near future – will be taken seriously when there is an opening to articulate them.

The graphics for the Cyborg and Transhumanist Forum were created by Tom Ross, the U.S. Transhumanist Party Director of Media Production.

Become a member of the U.S. Transhumanist Party / Transhuman Party free of charge, no matter where you reside.

References

• Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018

• “A Word on Implanted NFC Tags” – Article by Ryan Starr

• Assembly Bill 226, Second Reprint – This is the version of the bill that passed the Senate on May 23, 2019.

• Amendment to Assembly Bill 226 to essentially remove the prohibition against voluntary microchip implants

• Future Grind Podcast

• Synnister – Website of Anastasia Synn

Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018


Gennady Stolyarov II
Ray Kurzweil


The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.

U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.

Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentation by Gennady Stolyarov II at RAAD Fest 2018, entitled, “The U.S. Transhumanist Party: Four Years of Advocating for the Future”.

Review of Ray Kurzweil’s “How to Create a Mind” – Article by G. Stolyarov II


G. Stolyarov II


How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading toward the not-too-distant future (the late 2020s, according to Kurzweil) during which computers would be able to emulate human minds.

Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in its details, it can arise on the basis of carrying out simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.
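To illustrate the “simple rules, intricate results” point in the preceding paragraph with my own example (not one from Kurzweil’s book), the elementary cellular automaton known as Rule 30 applies a single trivial update rule to a row of cells, yet the pattern it produces is famously intricate and seemingly chaotic:

```python
# Illustrative sketch (not from the book): Rule 30, an elementary cellular automaton.
# A single, trivially simple local rule applied repeatedly yields a famously
# intricate, seemingly chaotic pattern - simple rules, complex emergent structure.

WIDTH, STEPS = 64, 32

def rule30(left: int, center: int, right: int) -> int:
    """Wolfram's Rule 30: the new cell is left XOR (center OR right)."""
    return left ^ (center | right)

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start with a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    # Update every cell from its neighborhood, wrapping around at the edges.
    row = [
        rule30(row[(i - 1) % WIDTH], row[i], row[(i + 1) % WIDTH])
        for i in range(WIDTH)
    ]
```

Running the sketch prints a triangular pattern whose right-hand side never settles into an obvious repeating structure, despite the one-line rule that generates it.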

With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of today.

The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (which showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. If the task is to engineer a mind that has at least the faculties of the human brain, then Kurzweil is especially wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which it became fashionable to disparage in some circles only in the 20th century. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to be able to rationally conclude that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.

One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of becoming a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you.  (How to Create a Mind, pp. 243-244)

This thought experiment is essentially the same one that I independently posited in my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self”:

Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chooses to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.

Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)

Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, but with a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist. Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I differ. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of the nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result might be indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation. It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.

How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes of arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans maintain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is due to a deliberate push upward by the efforts of the minds of the creators of progress, using the machines they have built.

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II). Learn more about Mr. Stolyarov here.

Beginners’ Explanation of Transhumanism – Presentation by Bobby Ridge and Gennady Stolyarov II


Bobby Ridge
Gennady Stolyarov II


Bobby Ridge, Secretary-Treasurer of the U.S. Transhumanist Party, and Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, provide a broad “big-picture” overview of transhumanism and major ongoing and future developments in emerging technologies that present the potential to revolutionize the human condition and resolve the age-old perils and limitations that have plagued humankind.

This is a beginners’ overview of transhumanism – which means that it is for everyone, including those who are new to transhumanism and the life-extension movement, as well as those who have been involved in it for many years – since, when it comes to dramatically expanding human longevity and potential, we are all beginners at the beginning of what could be our species’ next great era.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside.

See Mr. Stolyarov’s presentation, “The U.S. Transhumanist Party: Pursuing a Peaceful Political Revolution for Longevity”.

In the background of some of the video segments is a painting now owned by Mr. Stolyarov, from “The Singularity is Here” series by artist Leah Montalto.

Discussion on Life-Extension Advocacy – G. Stolyarov II Answers Audience Questions



G. Stolyarov II

******************************

Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, answers audience questions regarding life-extension advocacy and possibilities for broadening the reach of transhumanist and life-extensionist ideas.

While we were unable to get into contact with our intended guest, Chris Monteiro, we were nonetheless able to have a productive, wide-ranging discussion that addressed many areas of emerging technologies, as well as trends in societal attitudes towards them and related issues of cosmopolitanism, ideology, and the need for a new comprehensive philosophical paradigm of transmodernism or hypermodernism that would build off of the legacy of the 18th-century Age of Enlightenment.

Become a member of the U.S. Transhumanist Party for free. Apply here.

Are We Entering The Age of Exponential Growth? – Article by Marian L. Tupy


Marian L. Tupy
******************************

In his 1999 book The Age of Spiritual Machines, the famed futurist Ray Kurzweil proposed “The Law of Accelerating Returns.” According to Kurzweil’s law, “the rate of change in a wide variety of evolutionary systems (including but not limited to the growth of technologies) tends to increase exponentially.” I mention Kurzweil’s observation because it certainly feels as though we are entering an age of colossal and rapid change. Consider the following:

According to The Telegraph, “Genes which make people intelligent have been discovered [by researchers at the Imperial College London] and scientists believe they could be manipulated to boost brain power.” This could usher in an era of super-smart humans and accelerate the already fast process of scientific discovery.

Elon Musk’s SpaceX Falcon 9 rocket has successfully “blasted off from Cape Canaveral, delivered communications satellites to orbit before its main-stage booster returned to a landing pad.” Put differently, space flight has just become much cheaper, since main-stage booster rockets – which are very expensive – previously could not be reused.

The CEO of Merck has announced a major breakthrough in the fight against lung cancer. Keytruda “is a new category of drugs that stimulates the body’s immune system.” “Using Keytruda,” Kenneth Frazier said, “will extend [the life of lung cancer sufferers] … by approximately 13 months on average. We know that it will reduce the risk of death by 30-40 percent for people who had failed on standard chemo-therapy.”

Also, there has been massive progress in the development of “edible electronics.” New technology developed by Bristol Robotics Laboratory “will allow the doctor to feel inside your body without making a single incision, effectively taking the tips of the doctor’s fingers and transplant them onto the exterior of the [edible] robotic pill. When the robot presses against the interior of the intestinal tract, the doctor will feel the sensation as if her own fingers were pressing the flesh.”

Marian L. Tupy is the editor of HumanProgress.org and a senior policy analyst at the Center for Global Liberty and Prosperity. He specializes in globalization and global wellbeing, and the political economy of Europe and sub-Saharan Africa. His articles have been published in the Financial Times, Washington Post, Los Angeles Times, Wall Street Journal, U.S. News and World Report, The Atlantic, Newsweek, The U.K. Spectator, Weekly Standard, Foreign Policy, Reason magazine, and various other outlets both in the United States and overseas. Tupy has appeared on The NewsHour with Jim Lehrer, CNN International, BBC World, CNBC, MSNBC, Al Jazeera, and other channels. He has worked on the Council on Foreign Relations’ Commission on Angola, testified before the U.S. Congress on the economic situation in Zimbabwe, and briefed the Central Intelligence Agency and the State Department on political developments in Central Europe. Tupy received his B.A. in international relations and classics from the University of the Witwatersrand in Johannesburg, South Africa, and his Ph.D. in international relations from the University of St. Andrews in Great Britain.

This work by Cato Institute is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

How Anti-Individualist Fallacies Prevent Us from Curing Death – Article by Edward Hudgins


Edward Hudgins
July 3, 2015
******************************

Are you excited about Silicon Valley entrepreneurs investing billions of dollars to extend life and even “cure” death?

It’s amazing that such technologically challenging goals have gone from sci-fi fantasies to fantastic possibilities. But the biggest obstacles to life extension could be cultural: the anti-individualist fallacies arrayed against this goal.

Entrepreneurs defy death

A recent Washington Post feature documents the “Tech titans’ latest project: Defy death.” Peter Thiel, PayPal co-founder and venture capitalist, has led the way, raising awareness and funding regenerative medicines. He explains: “I’ve always had this really strong sense that death was a terrible, terrible thing… Most people end up compartmentalizing and they are in some weird mode of denial and acceptance about death, but they both have the result of making you very passive. I prefer to fight it.”

Others prefer to fight as well. Google CEO Larry Page created Calico to invest in start-ups working to stop aging. Oracle’s Larry Ellison has also provided major money for anti-aging research. Google’s Sergey Brin and Facebook’s Mark Zuckerberg both have funded the Breakthrough Prize in Life Sciences Foundation.

Beyond the Post piece, we can applaud the education in the exponential technologies needed to reach these goals provided by Singularity U., co-founded by futurist Ray Kurzweil – who believes humans and machines will merge in the decades to come, making us transhuman – and by X-Prize founder Peter Diamandis.

The Post piece points out that while in the past two-thirds of science and medical research was funded by the federal government, today private parties put up two-thirds. These benefactors bring their entrepreneurial talents to their philanthropic efforts. They are restless for results and not satisfied with the slow pace of government bureaucracies plagued by red tape and politics.

“Wonderful!” you’re thinking. “Who could object?”

Laurie Zoloth’s inequality fallacy

 Laurie Zoloth for one. This Northwestern University bioethicist argues that “Making scientific progress faster doesn’t necessarily mean better — unless if you’re an aging philanthropist and want an answer in your lifetime.” The Post quotes her further as saying that “Science is about an arc of knowledge, and it can take a long time to play out.”

Understanding the world through science is a never-ending enterprise. But in this case, science is also about billionaires wanting answers in their lifetimes because they value their own lives foremost and they do not want them to end. And the problem is?

Zoloth grants that it is “wonderful to be part of a species that dreams in a big way,” but she also wants “to be part of a species that takes care of the poor and the dying.” Wouldn’t delaying or even eliminating dying be even better?

The discoveries these billionaires facilitate will help millions of people in the long run. But her objection seems rooted in a morally distorted affinity for equality of condition: the feeling that it is wrong for some folks to have more than others—never mind that they earned it—in this case, early access to life-extending technologies. She seems to feel that it is wrong for these billionaires to put their own lives, loves, dreams, and well-being first.

We’ve heard this “equality” nonsense for every technological advance: only elites will have electricity, telephones, radios, TVs, computers, the internet, smartphones, whatever. Yes, there are first adopters, those who can afford new things. Without them footing the bills early on, new technologies would never become widespread and affordable. This point should be blindingly obvious today, since the spread of new technologies in recent decades has accelerated. But in any case, the moral essential is that it is right for individuals to seek the best for themselves while respecting their neighbors’ liberty to do the same.

Leon Kass’s “long life is meaningless” fallacy

 The Post piece attributes to political theorist Francis Fukuyama the belief that “a large increase in human life spans would take away people’s motivation for the adaptation necessary for survival. In that kind of world, social change comes to a standstill.”

Nonsense! As average lifespans doubled in past centuries, social change—mostly for the better—accelerated. Increased lifespans in the future could allow individuals to take on projects spanning centuries rather than decades. Indeed, all who love their lives regret that they won’t live to see, experience, and help create the wonders of tomorrow.

The Post cites physician and ethicist Leon Kass who asks: “Could life be serious or meaningful without the limit of mortality?”

Is Kass so limited in imagination or ignorant of our world that he doesn’t appreciate the great, long-term projects that could engage us as individuals seriously and meaningfully for centuries to come? (I personally would love to have the centuries needed to work on terraforming Mars, making it a new habitat for humanity!)

Fukuyama and Kass have missed the profound human truth that we each as individuals create the meaning for our own lives, whether we live 50 years or 500. Meaning and purpose are what only we can give ourselves as we pursue productive achievements that call upon the best within us.

Francis Fukuyama’s anti-individualist fallacy

 The Post piece quotes Fukuyama as saying “I think that research into life extension is going to end up being a big social disaster… Extending the average human life span is a great example of something that is individually desirable by almost everyone but collectively not a good thing. For evolutionary reasons, there is a good reason why we die when we do.”

What a morally twisted reason for opposing life extension! Millions of individuals should literally damn themselves to death in the name of society. Then count me anti-social.

Some might take from Fukuyama’s premise a concern that millions of individuals living to 150 will spend half that time bedridden, vegetating, consuming resources, and not producing. But the life extension goal is to live long with our capacities intact—or enhanced! We want 140 to be the new 40!

What could be good evolutionary reasons why we die when we do? Evolution only metaphorically has “reasons.” It is a biological process that blindly adapted us to survive and reproduce: it didn’t render us immune to ailments. Because life is the ultimate value, curing those ailments rather than passively suffering them is the goal of medicine. Life extension simply takes the maintenance of human life a giant leap further.

Live long and prosper

 Yes, there will be serious ethical questions to face as the research sponsored by benevolent billionaires bears fruit. But individuals who want to live really long and prosper in a world of fellow achievers need to promote human life as the ultimate value and the right of all individuals to live their own lives and pursue their own happiness as the ultimate liberty.

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright, The Atlas Society. For more information, please visit www.atlassociety.org.

“Ex Machina” Movie Review – Article by Edward Hudgins


Edward Hudgins
July 3, 2015
******************************

How will we know if an artificial intelligence actually attains a human level of consciousness?

As work in robotics and merging man and machine accelerates, we can expect more movies on this theme. Some, like Transcendence, will be dystopian warnings of potential dangers. Others, like Ex Machina, elicit serious thought about what it is to be human. Combining a good story and good acting, Ex Machina should interest technophiles and humanists alike.

The Turing Test

The film opens on Caleb Smith (Domhnall Gleeson), a 27-year-old programmer at uber-search engine company Blue Book, who wins a lottery to spend a week at the isolated mountain home of the company’s reclusive genius creator, Nathan Bateman (Oscar Isaac). But the hard-drinking, eccentric Nathan tells Caleb that they’re not just going to hang out and get drunk.

He has created an android AI named Ava (Alicia Vikander) with a mostly woman-like, but part robot-like, appearance. The woman part is quite attractive. Nathan wants Caleb to spend the week administering the Turing Test to determine whether the AI shows intelligent behavior indistinguishable from that of a human. Normally this test is administered so the tester cannot see whether he’s dealing with a human or a machine. The test consists of exchanges of questions and answers, and is usually done in some written form. Since Caleb already knows Ava is an AI, he really needs to be convinced in his daily sessions with her, reviewed each evening with Nathan, that Nathan has created, in essence, a sentient, self-conscious human. It’s a high bar.

Android sexual attraction

Ava is kept locked in a room where her behavior can be monitored 24/7. Caleb talks to her through a glass wall, and at first he asks standard questions any good techie would ask to determine if she is human or machine. But soon Ava is showing a clear attraction to Caleb. The feeling is mutual.

In another session Ava is turning the tables. She wants to know about Caleb and be his friend. But during one of the temporary power outages that seem to plague Nathan’s house, when the monitoring devices are off, Ava tells Caleb that Nathan is not his friend and not to trust him. When the power comes back on, Ava reverts to chatting about getting to know Caleb.

In another session, when Ava reveals she’s never allowed out of the room, Caleb asks where she would choose to go if she could leave. She says to a busy traffic intersection. To people watch! Curiosity about humanity!

Ava then asks Caleb to close his eyes and she puts on a dress and wig to cover her robot parts. She looks fully human. She says she’d wear this if they went on a date. Nathan later explains that he gave Ava gender since no human is without one. That is part of human consciousness. Nathan also explains that he did not program her specifically to like Caleb. And he explains that she is fully sexually functional.

A human form of awareness

In another session Caleb tells Ava what she certainly suspects, that he is testing her. To communicate what he’s looking for, he offers the “Mary in a Black and White Room” thought experiment. Mary has always lived in a room with no colors. All views of the outside world are through black and white monitors. But she understands everything about the physics of color and about how the human eyes and brain process color. But does she really “know” or “understand” color—the “qualia”—until she walks outside and actually sees the blue sky?

Is Ava’s imitation of the human level of consciousness or awareness analogous to Mary’s awareness of color while still in the black and white room – purely theoretical? Is Ava simply a machine, a non-conscious automaton running a program by which she mimics human emotions and traits?

Ava is concerned with what will happen if she does not pass the Turing test. Nathan later tells Caleb that he thinks the AI after Ava will be the one he’s aiming for. And what will happen to Ava? The program will be downloaded and the memories erased. Caleb understands that this means Ava’s death.

Who’s testing whom?

During a blackout, this one with Nathan in a drunken stupor, Caleb borrows Nathan’s passcard to access closed rooms, and he discovers some disturbing truths about what preceded Ava and led to her creation.

In the next session, during a power outage, Ava and Caleb plan an escape from the facility. They plan to get Nathan drunk, change the lock codes on the doors, and get out at the next power outage.

But has Nathan caught on? On the day Caleb is scheduled to leave he tells Nathan that Ava has passed the Turing Test. But Nathan asks whether Caleb thinks Ava is just pretending to like Caleb in order to escape. If so, this would show human intelligence and would mean that Ava indeed has passed the test.

But who is testing and manipulating whom and to what end? The story takes a dramatic, shocking turn as the audience finds out who sees through whose lies and deceptions. Does Mary ever escape from the black and white room? Is Ava really conscious like a human?

What it means to be human

In this fascinating film, writer/director Alex Garland explores what it is to be human in terms of basic drives and desires. There is the desire to know, understand, and experience. There is the desire to love and be loved. There is the desire to be free to choose. And there is the love of life.

But to be human is also to be aware that others might block one from pursuing human goals, that others can be cruel, and they can lie and deceive. There is the recognition that one might need to use the same behavior in order to be human.

If thinkers like Singularity theorist Ray Kurzweil are right, AIs might be passing the Turing Test within a few decades. But even if they don’t, humans will more and more rely on technologies that could enhance our minds and capacities and extend our lives. As we do so, it will be even more important that we keep in mind what it is to be human and what is best about being human. Ex Machina will not only provide you with an entertaining evening at the movies; it will also help you use that very human capacity, the imagination, to prepare your mind to meet these challenges.

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright, The Atlas Society. For more information, please visit www.atlassociety.org.

Google, Entrepreneurs, and Living 500 Years – Article by Edward Hudgins


Edward Hudgins
March 29, 2015
******************************

“Is it possible to live to be 500?”

“Yes,” answers Bill Maris of Google, without qualifications.

A Bloomberg Markets piece on “Google Ventures and the Search for Immortality” documents how the billions of dollars Maris invests each year are transforming life itself. But the piece also makes clear that the most valuable asset he possesses—and that, in others, makes those billions work—is entrepreneurship.

Google’s Bio-Frontiers

Maris, who heads a venture capital fund set up by Google, studied neuroscience in college. So perhaps it is no surprise that he has invested over one-third of the fund’s billions in health and life sciences. Maris has been influenced by futurist and serial inventor Ray Kurzweil, who predicts that by 2045 humans and machines will merge, radically transforming and extending human life, perhaps indefinitely. Google has hired Kurzweil to carry on his work toward what he calls the “singularity.”

Maris was instrumental in creating Calico, a Google company that seeks nothing less than to cure aging – that is, to defeat death itself. This and other companies into which Maris directs funds have specific projects to bring about this goal, from genetic research to analyzing cancer data.

Maris observes that “There are a lot of billionaires in Silicon Valley, but in the end, we are all heading for the same place. If given the choice between making a lot of money or finding a way to live longer, what do you choose?”

Google Ventures does not restrict its investments to life sciences. For example, it helped with the Uber car service and has put money into data management and home automation tech companies.

“Entrepreneuring” tomorrow

Perhaps the most important take-away from the Bloomberg article is the “why” behind Maris’s efforts. The piece states that “A company with $66 billion in annual revenue isn’t doing this for the money. What Google needs is entrepreneurs.” And that is what Maris and Google Ventures are looking for.

They seek innovators with new, transformative and, ultimately, profitable ideas and visions. Most important, they seek those who have the strategies and the individual qualities that will allow them to build their companies and make real their visions.

Entrepreneurial life

But entrepreneurship is not just a formula for successful start-ups. It is a concept that is crucial for the kind of future that Google and Maris want to bring about, beyond the crucial projects of any given entrepreneur.

Entrepreneurs love their work. They aim at productive achievement. They are individualists who act on the judgments of their own minds. And they take full responsibility for all aspects of their enterprises.

On this model, all individuals should treat their own lives as their own entrepreneurial opportunities. They should love their lives. They should aim at happiness and flourishing—their big profit!—through productive achievement. They should act on the judgments of their own minds. And they should take full responsibility for every aspect of their lives.

And this entrepreneurial morality must define the culture of America and the world if the future is to be the bright one at which Google and Maris aim. An enterprise worthy of a Google investment would seek to promote this morality throughout the culture. It would seek strategies to replace cynicism and a sense of personal impotence and social decline with optimism and a recognition of personal efficacy and the possibility of social progress.

So let’s be inspired by Google’s efforts to change the world, and let’s help promote the entrepreneurial morality that is necessary for bringing it about.

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright, The Atlas Society. For more information, please visit www.atlassociety.org.