Current limitations need to be accounted for

Aug 19, 2009 18:31 GMT
Robonaut B's upper body can attach to a Segway-built robotic mobility platform (RMP) in order to drive on Earth

Science-fiction novels and movies have long pictured the future of humankind in a robotized world, where mechanical entities are self-aware, perfectly able to fend for themselves, and capable of making their own decisions. In such a scenario, the Three Laws of Robotics, proposed by SF writer Isaac Asimov in his books, would apply, but experts say they need a thorough reshaping when today's realities are taken into account. Putting too much responsibility on a still insufficiently developed technology could end in disaster.

Present-day robots still lack even the most basic ability to situate themselves within their environment. They navigate around obstacles because they are instructed not to slam into them, but they do not understand why they are given these commands. They also lack autonomy in any true sense of the word, and remain heavily reliant on their creators, the humans. Perhaps, in a few decades, robots will be able to adapt seamlessly when introduced into a new environment, but they cannot do so now.

“The fascination with robots has led some people to try retreating from responsibility for difficult decisions, with potentially bad consequences,” Ohio State University systems engineer David Woods explains, quoted by Space. Pushing robots into situations that are well beyond their abilities could result in immediate disaster, and the people who built them would still be held responsible. That is why numerous protests erupted when scientists proposed a lethally armed, robotized police force.

Woods and other experts say that robotics manufacturers could follow NASA's example in handling robots for various missions. Together with Texas A&M University rescue robotics expert Robin Murphy, Woods developed three laws that essentially place humans as the highest authority in the human-robot relationship. To better understand these rules, first consider Asimov's: a robot may not harm humans, or, through inaction, allow humans to come to harm; a robot must obey human commands, except when they conflict with the first law; and finally, a robot must protect its own existence, except when doing so conflicts with the first two laws.

The set of laws that Murphy and Woods propose runs as follows: humans must not deploy robotic systems that fall below the highest quality and safety standards; robots must respond to human orders as appropriate for their roles (and only to a limited number of handlers); and finally, robots must have sufficient autonomy to protect themselves, as long as this does not conflict with the first two laws. The third law also states that a clear and smooth transfer of control from robots to humans must be possible at all times, whenever it is required. For instance, it is imperative that Opportunity not drive into Endeavour Crater without its JPL control team telling it to do so.

“They understand that when they have a safe or low-risk envelope, they can delegate more autonomy to the robot. When unusual things happen, they restrict the envelope of autonomy,” Woods says of the way NASA handles its robots. “People are making this leap of faith that robot autonomy will grow and solve our problems. But there's not a lot of evidence that autonomy by itself will make these hard, high-risk decisions go away,” he concludes.