Technology as the Solution to Existential Risk
What is the relationship between technology and existential risk? Technology does not cause existential risk; rather, it is the only effective means of countering it.
I do not deny that existential risks are real – but I find that most existential risks are present already (e.g., risks from asteroid impacts, a new ice age, pandemics, or nuclear war) and that technological progress is the way to remove many of those risks without introducing others as great or greater. In my view, the existential risks posed by emerging technologies are quite minor (if significant at all) compared to the tremendous benefits such technologies would bring in overcoming the existential risks we currently face (including the greatest risk to our own individual existences – our mortality from senescence).
My essay “The Real War – and Why Inter-Human Wars Are a Distraction” describes my views on this matter in greater depth.
In short, I am a techno-optimist, one who considers it imperative to restore the Victorian-era ideal of Progress as a guiding principle in contemporary societies. The problem, as I see it, is not in the technologies of the future, but in the barbarous and primitive condition of the world as it exists today, with its many immediate perils.
As a libertarian, I believe that entrepreneurship and innovation in even semi-free markets can address existential risks far more effectively than any national government – and that bureaucratic management of these efforts would only hamper progress while incurring the risk that the endeavors would be subverted for nefarious objectives. (The National Security Agency’s recent attempt at a total surveillance state is a case in point.)
Fears of technology, however, are themselves our greatest existential risk. They have a real potential to halt progress in many fruitful areas – either through restrictive legislation or through the actions of a few Luddite fanatics who take it upon themselves to “right” the wrongs they perceive in a world of advancing technology.

Examples already exist of such fanatics exploiting fears of technologies that are nowhere close to existing yet. For instance, in a post on the LessWrong blog, one “dripgrind” – a sincere and therefore genuinely frightening fanatic – explicitly advocates the assassination of AI researchers and chastises the Singularity Institute for Artificial Intelligence for not engaging in this despicable tactic. Such is the consequence of spreading fears about AI technology, rather than simply and calmly developing that technology in a rational manner, so that it is incapable of harming humans.

Many among the uneducated and superstitious are already on edge about emerging technologies. A strong message of vibrant optimism and reassurance is needed to prevent these people from lashing out and, in the process, undermining the progress of our civilization. The Frankenstein syndrome should be resisted in whatever guise it appears.