
Google researching how to build an honest robot

CIOL Writers

Making robots might not be so tough, but making them do the right thing without undesirable consequences is a task in itself. A robot might help you clean and dust your house, but the two vases and a lamp it broke in the process might make you wonder whether the help was really helpful.


Researchers at Alphabet unit Google, along with collaborators at Stanford University, the University of California at Berkeley, and OpenAI - an artificial intelligence development company backed by Elon Musk - have some ideas about how to design robot minds that won’t lead to undesirable consequences for the people they serve. They have published a technical paper, “Concrete Problems in AI Safety,” outlining their thinking.

The research is motivated by the immense popularity of artificial intelligence, software that can learn about the world and act within it. Today’s AI systems let cars drive themselves, interpret speech spoken into phones, and devise trading strategies for the stock market. But before we let smart machines make their own decisions, we need to make sure that the goals of the robots are aligned with those of their human owners.

“While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative,” Google researcher Chris Olah wrote in a blog post accompanying the paper. “We believe it’s essential to ground concerns in real machine learning research and to start developing practical approaches for engineering AI systems that operate safely and reliably.”


The report not only describes some of the problems robot designers may face in the future but also lists techniques for building software that smart machines can’t subvert. The challenge is the open-ended nature of intelligence, and the puzzle is akin to one faced by regulators in other areas, like the financial system: how do you design rules that let entities achieve their goals within the system you regulate, without letting them subvert those rules and without constricting them unnecessarily?
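To make that tension concrete, here is a minimal, hypothetical sketch, not code from the paper: a “soft” rule that merely penalizes a violation can be outweighed by a large enough task reward and thus subverted, while a “hard” constraint cannot be gamed but risks over-constricting the agent. The action names, reward values, and penalty size are all illustrative assumptions.

```python
# Hypothetical sketch of soft penalties vs. hard constraints.
# All names and numbers here are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    reward: float          # task reward the agent expects for this action
    violates_rule: bool    # whether the action breaks a safety rule

ACTIONS = [
    Action("clean carefully around the vase", reward=1.0, violates_rule=False),
    Action("sweep fast, knocking the vase over", reward=3.0, violates_rule=True),
]

def soft_penalty_choice(actions, penalty=1.5):
    # A "soft" rule: a violation only subtracts a fixed penalty, so a
    # large enough task reward still outweighs it and the rule is subverted.
    return max(actions, key=lambda a: a.reward - (penalty if a.violates_rule else 0.0))

def hard_constraint_choice(actions):
    # A "hard" rule: violating actions are filtered out before optimizing,
    # at the cost of possibly constricting the agent unnecessarily.
    allowed = [a for a in actions if not a.violates_rule]
    return max(allowed, key=lambda a: a.reward)

print(soft_penalty_choice(ACTIONS).name)    # picks the rule-breaking action
print(hard_constraint_choice(ACTIONS).name) # picks the safe action
```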

With this paper, Google and its collaborators are trying to solve problems they can only vaguely understand before those problems manifest in real-world systems. The mindset is, roughly: better to be slightly prepared than not prepared at all.

“With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justified loss of trust in automated systems,” the researchers write in the paper.


The solutions the researchers propose include limiting how much control the AI system has over its environment, so as to contain the damage, and pairing a robot with a human buddy. Other ideas include programming “trip wires” into the AI machine to warn humans if it suddenly steps out of its intended routine.
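The paper itself does not supply code, but a rough sketch of how these ideas might combine is a monitor that wraps the agent, restricts the actions it is permitted to take, and alerts a human operator when behavior drifts from the vetted routine. Every name, action set, and threshold below is an illustrative assumption.

```python
# Hedged sketch of a "trip wire" safety monitor. Class names, the
# allowed-action set, the novelty threshold, and notify_human are all
# illustrative assumptions, not constructs from the paper.

ALLOWED_ACTIONS = {"vacuum", "dust", "recharge"}   # limit control over the environment

class TripWireMonitor:
    def __init__(self, max_novel_actions=2):
        self.expected_routine = []          # actions observed during vetting
        self.novel_count = 0
        self.max_novel_actions = max_novel_actions

    def record_routine(self, action):
        self.expected_routine.append(action)

    def check(self, action):
        if action not in ALLOWED_ACTIONS:
            self.notify_human(f"blocked disallowed action: {action}")
            return False                     # hard stop: outside the permitted set
        if action not in self.expected_routine:
            self.novel_count += 1
            if self.novel_count > self.max_novel_actions:
                self.notify_human(f"agent deviating from routine: {action}")
                return False                 # trip wire fired; defer to the human
        return True

    @staticmethod
    def notify_human(message):
        print(f"[ALERT] {message}")          # stand-in for paging an operator

monitor = TripWireMonitor()
for vetted in ["vacuum", "dust"]:
    monitor.record_routine(vetted)

for proposed in ["vacuum", "dust", "recharge", "recharge", "open_front_door"]:
    if monitor.check(proposed):
        print(f"executing: {proposed}")
```

In this toy run, the two novel "recharge" actions pass because they stay under the novelty threshold, while "open_front_door" is blocked outright and raises an alert, which is the warning-to-humans behavior the researchers describe.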
