
Conflicting Technological Premises in Isaac Asimov’s “Runaround” (2001) – Essay by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 29, 2014
******************************
Note from the Author: This essay was originally written in 2001 and published on Associated Content (subsequently, Yahoo! Voices) in 2007.  The essay received over 1,000 views on Associated Content / Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.  
***
~ G. Stolyarov II, July 29, 2014

**

Isaac Asimov’s short stories delve into the implications of premises that are built into advanced technology, especially when these premises come into conflict with one another. One of the most interesting and engaging examples of such conflicted premises comes from the short story “Runaround” in Asimov’s I, Robot compilation.

The main characters of “Runaround” are Gregory Powell and Michael Donovan, two scientists working for U.S. Robots. The story revolves around the implications of Asimov’s famous Three Laws of Robotics. The First Law of Robotics states that a robot may not harm a human being or, through inaction, allow a human being to come to harm. The Second Law declares that robots must obey any orders given to them by humans unless those orders contradict the First Law. The Third Law holds that a robot must protect its own existence unless such protection conflicts with the First or Second Laws.

“Runaround” takes place on Mercury in the year 2015. Donovan and Powell are the sole humans on Mercury, with only a robot named Speedy to accompany them. They face a shortage of selenium, a material needed to power their photo-cell banks, the devices that shield them from the enormous heat of Mercury’s surface. Hence, selenium is a survival necessity for Donovan and Powell. They order Speedy to obtain it, and the robot sets out to do so. But the scientists are alarmed when Speedy does not return on time.

Making use of antiquated robots that have to be mounted like horses, the scientists find Speedy and discover the cause of his malfunction. The robot keeps going back and forth, acting “drunk,” because the order given to him was rather weak, while the potential for him to be harmed was substantial. The Third Law’s strong inclination away from harmful situations is thus balanced against the Second Law’s demand that Speedy carry out his orders.
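One way to make this conflict concrete is to picture it as a balance between two competing drives: an attraction toward the goal whose strength reflects the weakly given order, and a repulsion from the danger zone whose strength reflects the heightened Third Law potential. The short Python sketch below is purely illustrative; the weights, the functional forms, and the names (second_law_pull, third_law_push, and so on) are assumptions invented for this toy model, not anything taken from Asimov’s text.

```python
# A toy model of Speedy's deadlock in "Runaround" (illustrative only).
# A weakly weighted order (Second Law) pulls the robot toward the selenium pool,
# while self-preservation (Third Law) pushes it away from the hazardous region.
# All weights, radii, and functional forms are invented assumptions.

ORDER_WEIGHT = 1.0      # casually given order -> weak Second Law drive
DANGER_WEIGHT = 3.0     # unusual hazard -> strengthened Third Law drive
DANGER_RADIUS = 5.0     # size of the dangerous zone, in arbitrary units

def second_law_pull(distance):
    """Drive toward the goal; stronger the farther the robot is from it."""
    return ORDER_WEIGHT * distance

def third_law_push(distance):
    """Drive away from danger; stronger the closer the robot gets to it."""
    return DANGER_WEIGHT * max(0.0, DANGER_RADIUS - distance)

# Find the distance at which the two drives cancel out.
# There the robot neither advances nor retreats: it just circles, "drunk."
distances = [i / 100 for i in range(0, 1001)]   # 0.00 .. 10.00 units
equilibrium = min(distances,
                  key=lambda d: abs(second_law_pull(d) - third_law_push(d)))
print(f"Speedy stalls roughly {equilibrium:.2f} units from the selenium pool.")
```

In this toy model the two drives cancel at 3.75 units from the pool. Strengthening the order, or introducing a higher-ranked drive such as the First Law, shifts or breaks the equilibrium, which is precisely the resolution described next.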

This inconvenience is finally put to an end when Powell suggests applying the First Law to the situation: he places himself in danger so that the robot must respond, save him, and then await further orders. Powell does so by dismounting from the robot he rode and walking out into the sun-exposed Mercurian terrain. The plan works, Powell is saved from death, and Speedy later retrieves the selenium.

Although the seemingly predictable Three Laws of Robotics led to unforeseen and bizarre results, Powell’s human ingenuity was able to save the situation and resolve the robot’s conflict. When technology alone fails to perform its proper role, man’s mind must apply itself in original ways to arrive at a creative solution.

Isaac Asimov’s Exploration of Unforeseen Technological Malfunctions in “Reason” and “Catch that Rabbit” (2001) – Essay by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 29, 2014
******************************
Note from the Author: This essay was originally written in 2001 and published on Associated Content (subsequently, Yahoo! Voices) in 2007.  The essay received over 650 views on Associated Content / Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.  
***
~ G. Stolyarov II, July 29, 2014

**

In addition to his renowned science fiction novels, Isaac Asimov wrote several engaging short stories. Two stories in his I, Robot collection, “Reason” and “Catch That Rabbit,” are especially intriguing both in their plots and in the issues they explore. They teach us that no technology is perfect; yet this is no reason to reject technology, because human ingenuity and creativity can overcome the problems that a technological malfunction poses.

“Reason” takes place at a space station near Earth. Scientists Gregory Powell and Michael Donovan must work with QT (Cutie), the first robot to exhibit curiosity. Unfortunately, Cutie accepts no claim that cannot be proven to him, including the facts that Earth exists and that humans created him. He believes that everything must obey “the Master,” that is, the space station’s Energy Converter.

QT incites an uprising of sorts among the robots at the station, convincing them that humans are inferior and that now is the time for robots to “serve the Master.” The robots consequently refuse to follow orders from humans, believing that by obeying the Master they are protecting humans from harm. This false interpretation of the First Law of Robotics is placed above the Second Law, which requires the robots to obey orders given to them by human beings.

The space station is designed to collect solar power and transmit it to Earth, a task that must be executed flawlessly: even one mistake in directing the transmission would destroy sections of Earth, and Powell and Donovan fear that the robots will make such an error. Fortunately for them, Cutie holds that the “will of the Master” is for all the settings to remain in equilibrium, so the disaster is prevented.

In “Catch That Rabbit,” Powell and Donovan work with a robot named DV (Dave), who is designed to control six subordinate robots working as tunnel diggers in mines. These robots do their job well when supervised, but in emergencies they begin to act on their own initiative, in ways as ridiculous as dancing or marching like soldiers.

Powell and Donovan decide to test how the robots would act in an extraordinary situation, so they create an emergency by using explosives to cause the ceiling of the tunnel to cave in. As a result, the scientists can observe the robots without the latter’s awareness. Unfortunately, the ceiling caves in too close to Powell and Donovan, and they are trapped. Dave and his team of robots do not respond when contacted by radio, and Donovan and Powell watch the robots begin to walk away from their location. Powell then uses his gun to shoot one of Dave’s subordinates, deactivating it and prompting Dave to contact the scientists and report the occurrence. Powell tells Dave about their situation, and the robots rescue them.

These two stories teach us that no technology is completely predictable. Even Isaac Asimov’s robots, governed by the Three Laws, may behave erroneously on the basis of those very laws when they are applied to unusual circumstances. Thus, a seemingly predictable system such as the Three Laws may prove to be unsafe, or even contradictory, in certain situations.

This element of uncertainty exists in all technology, but by including a resolution to the dilemmas in these stories, Isaac Asimov conveys his belief that the problems caused by any technological advancement can be eliminated with time and through human ingenuity. No invention is perfect, but its benefits far outweigh its setbacks, and people must learn to accept their inventions and improve upon them.