I am pleased to announce that my short story “What Did Not Have to Be” won first place in the Transhumanity.net 2033 Immortality Fiction Contest. What a fitting outcome at a time when I am focusing more on my writing! The prize for first place is $50.
Here is the Transhumanity.net posting of winners. Of all the entries, mine portrays indefinite human longevity in the most optimistic light – and it would indeed be wonderful if technological progress could get us to this most vital of all goals within the next 20 years!
My new short science-fiction story, “What Did Not Have to Be”, has been published as an entry in Transhumanity.net’s 2033 Immortality Fiction Contest. The story focuses on indefinite life extension and those who resist it. I invite you to read it and offer your thoughts.
Winners of the contest will be announced on January 10, 2013. I applaud Transhumanity.net’s efforts to promote the cause of indefinite life extension by encouraging the writing of fiction on the topic.
G. Stolyarov II
April 2, 2012
What is the relationship between technology and existential risk? Technology does not cause existential risk, but rather is the only effective means for countering it.
I do not deny that existential risks are real – but most existential risks exist already (e.g., risks from asteroid impacts, a new ice age, pandemics, or nuclear war), and technological progress is the way to remove many of those risks without introducing others as great or greater. In my view, the existential risks from emerging technologies are quite minor (if significant at all) compared to the tremendous benefits such technologies would bring in overcoming the existential risks we currently face – including the biggest risk to each of our individual existences: our own mortality from senescence.
My essay “The Real War – and Why Inter-Human Wars Are a Distraction” describes my views on this matter in greater depth.
In short, I am a techno-optimist, one who considers it imperative to restore the Victorian-era ideal of Progress as a guiding principle in contemporary societies. The problem, as I see it, is not in the technologies of the future, but in the barbarous and primitive condition of the world as it exists today, with its many immediate perils.
As a libertarian, I believe that entrepreneurship and innovation, even in semi-free markets, can address existential risks far more effectively than any national government – and that bureaucratic management of these efforts would only hamper progress while incurring the risk of subverting the endeavors for nefarious objectives. (The National Security Agency’s recent attempt at a total surveillance state is a case in point.)
But fears of technology are themselves our greatest existential risk. They have real potential to halt progress in many fruitful areas – whether through restrictive legislation or through the actions of a few Luddite fanatics who take it upon themselves to “right” the wrongs they perceive in a world of advancing technology. Such fanatics are already exploiting fears of technologies that do not yet come close to existing. For instance, in a post on the LessWrong blog, one “dripgrind” – a sincere and therefore genuinely frightening fanatic – explicitly advocates the assassination of AI researchers and chastises the Singularity Institute for Artificial Intelligence for not engaging in this despicable tactic. Such is the consequence of spreading fears about AI rather than simply and calmly developing the technology in a rational manner, so that it is incapable of harming humans.

Many among the uneducated and superstitious are already on edge about emerging technologies. A strong message of vibrant optimism and reassurance is needed to keep such people from lashing out and undermining the progress of our civilization in the process. The Frankenstein syndrome should be resisted in whatever guise it appears.