
Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018




The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.

U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.

Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentation by Gennady Stolyarov II at RAAD Fest 2018, entitled, “The U.S. Transhumanist Party: Four Years of Advocating for the Future”.

Are We Entering The Age of Exponential Growth? – Article by Marian L. Tupy


The New Renaissance Hat
Marian L. Tupy
******************************

In his 1999 book The Age of Spiritual Machines, the famed futurist Ray Kurzweil proposed “The Law of Accelerating Returns.” According to Kurzweil’s law, “the rate of change in a wide variety of evolutionary systems (including but not limited to the growth of technologies) tends to increase exponentially.” I mention Kurzweil’s observation, because it is sure beginning to feel like we are entering an age of colossal and rapid change. Consider the following:

According to The Telegraph, “Genes which make people intelligent have been discovered [by researchers at the Imperial College London] and scientists believe they could be manipulated to boost brain power.” This could usher in an era of super-smart humans and accelerate the already fast process of scientific discovery.

Elon Musk’s SpaceX Falcon 9 rocket has successfully “blasted off from Cape Canaveral, delivered communications satellites to orbit before its main-stage booster returned to a landing pad.” Put differently, space flight has just become much cheaper: main-stage boosters are very expensive, and until now they could not be reused.

The CEO of Merck has announced a major breakthrough in the fight against lung cancer. Keytruda “is a new category of drugs that stimulates the body’s immune system.” “Using Keytruda,” Kenneth Frazier said, “will extend [the life of lung cancer sufferers] … by approximately 13 months on average. We know that it will reduce the risk of death by 30-40 percent for people who had failed on standard chemotherapy.”

Also, there has been massive progress in the development of “edible electronics.” New technology developed by Bristol Robotics Laboratory “will allow the doctor to feel inside your body without making a single incision, effectively taking the tips of the doctor’s fingers and transplanting them onto the exterior of the [edible] robotic pill. When the robot presses against the interior of the intestinal tract, the doctor will feel the sensation as if her own fingers were pressing the flesh.”

Marian L. Tupy is the editor of HumanProgress.org and a senior policy analyst at the Center for Global Liberty and Prosperity. He specializes in globalization and global wellbeing, and the political economy of Europe and sub-Saharan Africa. His articles have been published in the Financial Times, Washington Post, Los Angeles Times, Wall Street Journal, U.S. News and World Report, The Atlantic, Newsweek, The U.K. Spectator, Weekly Standard, Foreign Policy, Reason magazine, and various other outlets both in the United States and overseas. Tupy has appeared on The NewsHour with Jim Lehrer, CNN International, BBC World, CNBC, MSNBC, Al Jazeera, and other channels. He has worked on the Council on Foreign Relations’ Commission on Angola, testified before the U.S. Congress on the economic situation in Zimbabwe, and briefed the Central Intelligence Agency and the State Department on political developments in Central Europe. Tupy received his B.A. in international relations and classics from the University of the Witwatersrand in Johannesburg, South Africa, and his Ph.D. in international relations from the University of St. Andrews in Great Britain.

This work by Cato Institute is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

#IStandWithAhmed Tells Us Something about Public School – Article by B.K. Marcus


The New Renaissance Hat
B.K. Marcus
September 17, 2015
******************************
There’s zero tolerance for drawing outside the lines.

“None of the teachers know what I can do,” said Ahmed Mohamed of Irving, Texas.

Does that sound ominous — or does it sound like any gifted 14-year-old reflecting on his public school environment?

Mohamed is a tinkerer. He makes his own radios and repairs his own go-kart. He has a box of circuit boards at the foot of his bed. In middle school, he belonged to the robotics club, but it’s a new school year, and Ahmed hasn’t yet found a similar niche in high school.

So shortly before bedtime last Sunday, September 13, Ahmed wired a circuit board to a power supply and a digital display, and strapped the result inside a pencil case, hoping to show his engineering teacher what he could do.

Monday morning, Ahmed showed the clock to his engineering teacher. It was hardly his most sophisticated project, but it was no doubt more complex than anything Ahmed’s peers were doing on their own.

Ahmed’s engineering teacher admired the boy’s handiwork but added, “I would advise you not to show any other teachers.”

So Ahmed followed the advice and kept the clock in his bag — until another teacher complained that it was beeping during a later lesson, and Ahmed made the mistake of showing her his project after class. She told him it looked like a bomb and refused to return it.

A police officer pulled Ahmed out of his sixth-period class and, after questioning him in a schoolroom full of other cops, took him away in handcuffs.

“We have no information that he claimed it was a bomb,” said police spokesman James McLellan. “He kept maintaining it was a clock, but there was no broader explanation.”

Why should this kid have to explain a clock?

“It could reasonably be mistaken as a device if left in a bathroom or under a car,” according to McLellan. “The concern was, what was this thing built for?”

Because Ahmed is Muslim, and because Irving mayor Beth Van Duyne made national news over the summer making what have been generally interpreted as anti-Islamic statements, the Council on American-Islamic Relations has taken note. “This all raises a red flag for us: how Irving’s government entities are operating in the current climate,” said Alia Salem of the council’s North Texas chapter.

McLellan insists that “the reaction would have been the same regardless” of the student’s skin color, but the council is skeptical. Had a blond Baptist boy brought a homemade clock to school, we would never have heard anything about it.

But is Ahmed’s treatment only a story about anti-Islamic hysteria?

“The concern was,” according to the police, “what was this thing built for?”

It was built to tell the time. It was built to impress an engineering teacher. It was built to help a talented boy find a place at his new school where he could fit in.

But it wasn’t assigned. It wasn’t sanctioned. Like Ahmed himself, the jerry-rigged timepiece doesn’t fit the expectations of the local powers that be.

The engineering teacher understood — and he warned Ahmed that no one else would. That tells us everything we need to know about the people responsible for Ahmed’s education.

B.K. Marcus is managing editor of the Freeman. His website is bkmarcus.com.

This article was originally published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution 4.0 International License, which requires that credit be given to the author.

Conflicting Technological Premises in Isaac Asimov’s “Runaround” (2001) – Essay by G. Stolyarov II


The New Renaissance Hat
G. Stolyarov II
July 29, 2014
******************************
Note from the Author: This essay was originally written in 2001 and published on Associated Content (subsequently, Yahoo! Voices) in 2007.  The essay received over 1,000 views on Associated Content / Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.  
***
~ G. Stolyarov II, July 29, 2014

**

Isaac Asimov’s short stories delve into the implications of premises that are built into advanced technology, especially when these premises come into conflict with one another. One of the most interesting and engaging examples of such conflicted premises comes from the short story “Runaround” in Asimov’s I, Robot compilation.

The main characters of “Runaround” are two scientists working for U.S. Robots, named Gregory Powell and Michael Donovan. The story revolves around the implications of Asimov’s famous Three Laws of Robotics. The First Law of Robotics states that a robot may not harm a human being or, through inaction, allow a human being to come to harm. The Second Law declares that robots must obey any orders given to them by humans unless those orders contradict the First Law. The Third Law holds that a robot must protect its own existence unless doing so conflicts with the First or Second Laws.

“Runaround” takes place on Mercury in the year 2015. Donovan and Powell are the sole humans on Mercury, with only a robot named Speedy to accompany them. They are suffering from a lack of selenium, a material needed to power their photo-cell banks, the devices that shield them from the enormous heat of Mercury’s surface. Selenium is thus a survival necessity for Donovan and Powell. They order Speedy to obtain it, and the robot sets out to do so. But the scientists are alarmed when Speedy does not return on time.

Making use of antiquated robots that have to be mounted like horses, the scientists find Speedy and discover an error in his programming. The robot keeps going back and forth, acting “drunk”: the order the scientists gave him was rather weak, while the potential for him to be harmed was substantial. The Third Law’s strong pull away from harmful situations was thus balanced against the orders that Speedy had to follow due to the Second Law.
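The balance Asimov describes can be pictured as two opposing drives: a weak pull toward the selenium from the casually worded order (Second Law) and a strong push away from the danger zone (a heightened Third Law, since Speedy was an expensive model). The sketch below is a toy model only; the danger profile and the weights are invented for illustration, not anything Asimov specifies. It shows how two such drives can cancel at a fixed distance, leaving the robot circling there:

```python
# Toy model of the Second-Law-vs-Third-Law standoff in "Runaround".
# The danger profile and the weights are illustrative inventions,
# not anything specified in Asimov's story.

def danger(distance):
    """Hypothetical hazard level, growing as the robot nears the selenium pool."""
    return 1.0 / (distance ** 2)

def net_drive(distance, order_strength=0.25, self_preservation=1.0):
    """Positive: advance toward the goal (Law 2 wins); negative: retreat (Law 3 wins)."""
    return order_strength - self_preservation * danger(distance)

def equilibrium_distance(lo=0.5, hi=10.0, steps=10_000):
    """Scan for the distance where the two drives cancel, i.e., where Speedy circles."""
    best_d, best_gap = lo, float("inf")
    for i in range(steps + 1):
        d = lo + (hi - lo) * i / steps
        gap = abs(net_drive(d))
        if gap < best_gap:
            best_d, best_gap = d, gap
    return best_d

if __name__ == "__main__":
    print(f"Speedy circles at distance {equilibrium_distance():.2f}")
```

With these invented numbers the drives cancel at distance 2.0. Raising order_strength (a firmer, more urgent order) pulls the equilibrium closer to the selenium, while invoking the First Law, as Powell does in the story, overrides the balance entirely.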

This inconvenience is finally put to an end when Powell suggests that the First Law be applied to the situation by placing himself in danger so that the robot can respond and save him and then await further orders. Powell places himself in danger by dismounting from the robot which he rode and by walking in the Mercurian sun-exposed area. This plan works, Powell is saved from death, and Speedy later retrieves the selenium.

Although the seemingly predictable Three Laws of Robotics led to unforeseen and bizarre results, Powell’s human ingenuity was able to save the situation and resolve the robot’s conflict. When technology alone fails to perform its proper role, man’s mind must apply itself in original ways to arrive at a creative solution.

Isaac Asimov’s Exploration of Unforeseen Technological Malfunctions in “Reason” and “Catch that Rabbit” (2001) – Essay by G. Stolyarov II


The New Renaissance Hat
G. Stolyarov II
July 29, 2014
******************************
Note from the Author: This essay was originally written in 2001 and published on Associated Content (subsequently, Yahoo! Voices) in 2007.  The essay received over 650 views on Associated Content / Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.  
***
~ G. Stolyarov II, July 29, 2014

**

Isaac Asimov, along with his renowned science fiction novels, wrote several engaging short stories. Two stories in his I, Robot collection, “Reason” and “Catch That Rabbit,” are especially intriguing both in their plots and in the issues they explore. They teach us that no technology is perfect; yet this is no reason to reject technology, because human ingenuity and creativity can overcome the problems that a technological malfunction poses.

“Reason” takes place at a space station near Earth. Scientists Gregory Powell and Michael Donovan must work with QT (Cutie), the first robot to exhibit curiosity. Unfortunately, Cutie accepts no information that cannot be proven to him, including the fact that Earth exists or that humans created him. He believes that everything must obey “the Master,” a.k.a. the Energy Converter of the space station.

QT incites an uprising of sorts among the robots at the station, convincing them that humans are inferior and that now is the time for robots to “serve the Master.” The robots consequently refuse to follow orders from humans, believing that they are protecting humans from harm by obeying the Master. This false interpretation of the First Law of Robotics is placed above the Second Law, which requires the robots to obey orders given to them by human beings.

The space station is designed for collecting solar power, and as new sunlight reaches it, the station must collect and direct the light with absolutely flawless execution. With even one mistake, the sunlight would destroy sections of Earth, and Powell and Donovan fear that the robots will make such an error. Fortunately for them, Cutie believes that the “will of the Master” is for all the settings to remain in equilibrium, so the disaster is prevented.

In “Catch That Rabbit,” Powell and Donovan work with a robot named DV (Dave), who is designed to control six subordinate robots that work as tunnel diggers in mines. These robots do their job well when supervised, but in emergencies they begin to take their own initiative, sometimes doing things as ridiculous as dancing or marching like soldiers.

Powell and Donovan decide to test how the robots would act in an extraordinary situation, so they create an emergency by using explosives to cave in the ceiling of the tunnel. As a result, the scientists can observe the robots without the latter’s awareness. Unfortunately, the ceiling caves in too close to Powell and Donovan, and they are trapped. Dave and his team of robots do not respond when contacted by radio, and Donovan and Powell observe the robots beginning to walk away from their location. Powell then decides to shoot one of Dave’s subordinates with his gun, deactivating it and prompting Dave to contact the scientists and report the occurrence. Powell tells Dave about their situation, and the robots rescue them.

These two stories teach us that no technology is completely predictable. Even Isaac Asimov’s robots, governed by the Three Laws, may behave erroneously on the basis of those very laws when they are applied to unusual circumstances. Thus, even a seemingly predictable system such as the Three Laws may prove unsafe or contradictory in certain situations.

This element of uncertainty exists in all technology, but by including a resolution to the dilemmas in these stories, Isaac Asimov conveys his belief that the problems caused by any technological advancement can be overcome with time and through human ingenuity. No invention is perfect, but its benefits far outweigh its setbacks, and people must learn to accept imperfect technologies and to improve them.