Elon Musk seems to be on board with the argument that, as a news headline sums up, “Humans must merge with machines or become irrelevant in AI age.” The PayPal co-founder and SpaceX and Tesla Motors innovator has, in the past, expressed concern about deep AI. He even had a cameo in Transcendence, a Johnny Depp film that was a cautionary tale about humans becoming machines.
Has Musk changed his views? What should we think?
In a speech this week at the opening of Tesla in Dubai, Musk warned governments to “Make sure researchers don’t get carried away — scientists get so engrossed in their work they don’t realize what they are doing.” But he also said that “Over time I think we will probably see a closer merger of biological intelligence and digital intelligence.” In techno-speak he told listeners that “Some high-bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence.” Imagine calculating a rocket trajectory by just thinking about it, since your brain and the artificial intelligence to which it links are one!
This is, of course, the vision that is the goal of Ray Kurzweil and Peter Diamandis, co-founders of Singularity University. It is the Transhumanist vision of philosopher Max More. It is a vision of exponential technologies that could even help us live forever.
But in the past, Musk has expressed doubts about AI. In July 2015, he signed onto “Autonomous Weapons: an Open Letter from AI & Robotics Researchers,” which warned that such devices could “select and engage targets without human intervention.” Yes, out-of-control killer robots! But it concluded that “We believe that AI has great potential to benefit humanity in many ways … Starting a military AI arms race is a bad idea…” The letter was also signed by Diamandis, one of the foremost AI proponents. So it’s fair to say that Musk was simply offering reasonable caution.
In Werner Herzog’s documentary Lo and Behold: Reveries of a Connected World, Musk explained that “I think that the biggest risk is not that the AI will develop a will of its own but rather that it will follow the will of people that establish its utility function.” He offered, “If you were a hedge fund or private equity fund and you said, ‘Well, all I want my AI to do is maximize the value of my portfolio,’ then the AI could decide … to short consumer stocks, go long defense stocks, and start a war.” One wonders whether the AI would appreciate that, in the long run, cities in ruins from war would harm the portfolio. In any case, Musk again seems to offer reasonable caution rather than blanket denunciations.
But in his Dubai remarks, he still seemed wary. Should he, and we, be worried?
Why move ahead with AI?
Exponential technologies already have revolutionized communications and information and are doing the same to our biology. In the short-term, human-AI interfaces, genetic engineering, and nanotech all promise to enhance our human capacities, to make us smarter, quicker of mind, healthier, and long-lived.
In the long-term Diamandis contends that “Enabled with [brain-computer interfaces] and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.”
What does this mean? If we are truly Transhuman, will we be soulless Star Trek Borgs rather than Datas seeking a better human soul? There has been much deep thinking about such questions, but I don’t know and neither does anyone else.
In the 1937 Ayn Rand short novel Anthem, we see an impoverished dystopia governed by totalitarian elites. We read that “It took fifty years to secure the approval of all the Councils for the Candle, and to decide on the number needed.”
Many elites today are in the throes of the “precautionary principle.” It holds that if an action or policy has a suspected risk of causing harm … the burden of proof that it is not harmful falls on those proposing the action or policy. Under this “don’t do anything for the first time” illogic, humans would never have used fire, much less candles.
By contrast, Max More offers the “proactionary principle.” It holds that we should assess risks according to available science, not popular perception; account for both risks and the costs of opportunities foregone; and protect people’s freedom to experiment, innovate, and progress.
Diamandis, More and, let’s hope, Musk are on the same path to a future we can’t predict but which we know can be beyond our most optimistic dreams. And you should be on that path too!