New decision-making mechanism created

Aug 25, 2009 22:31 GMT
Future robots could have a morality system based on our own, only without any biases

The goal of giving robots a sense of morality was recently brought one step closer by researchers in Portugal and Indonesia, who introduced a new approach to decision-making based on computational logic. Their efforts are described in the latest issue of the International Journal of Reasoning-based Intelligent Systems, AlphaGalileo reports.

Science-fiction authors have long built their book and movie plots around the idea of "evil" robots. In most of these stories, the machines turn to behavior that we perceive as bad, for instance by attacking their creators. What people often overlook, however, is that such machines would first need some notion of what humans consider moral. Making that a reality is still some time away, experts believe.

“Morality no longer belongs only to the realm of philosophers. Recently, there has been a growing interest in understanding morality from the scientific point of view,” the researchers write in their paper. The study was authored by Luis Moniz Pereira, from the Universidade Nova de Lisboa, in Portugal, and Ari Saptawijaya, from the Universitas Indonesia, in Depok, both of whom have developed a keen interest in computational logic and applied robotics over the years.

The system they developed predicts the possible logical outcomes of each decision a robot could make when faced with a dilemma. This essentially allows the machine to imagine the consequences of its actions, in much the same way humans do when confronted with a choice. This approach, known as “prospective logic,” may offer the first basis for robot ethics, the authors argue.
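Pereira and Saptawijaya express their system as a logic program, and the article includes none of their code, but the core idea can be sketched in a few lines. The Python snippet below is a simplified illustration only: it forward-chains a small set of causal rules to enumerate each candidate action's predicted consequences, then rules out any action whose consequences violate a moral constraint. The rules, action names, and constraint are invented for the example and are not taken from the paper.

```python
# Simplified sketch of consequence-driven ("prospective") moral reasoning.
# All rules, actions, and the moral constraint below are hypothetical examples.

RULES = [
    # (preconditions, consequence): purely illustrative causal knowledge
    ({"divert_trolley"}, "one_person_harmed"),
    ({"do_nothing"}, "five_people_harmed"),
]

def consequences(action, facts=frozenset()):
    """Forward-chain the causal rules to predict everything the action entails."""
    known = set(facts) | {action}
    changed = True
    while changed:
        changed = False
        for pre, post in RULES:
            if pre <= known and post not in known:
                known.add(post)
                changed = True
    return known - set(facts) - {action}

def permissible(action, forbidden):
    """An action is ruled out if any of its predicted consequences is forbidden."""
    return not (consequences(action) & forbidden)

if __name__ == "__main__":
    forbidden = {"one_person_harmed"}  # hypothetical moral constraint
    for act in ("divert_trolley", "do_nothing"):
        verdict = "permissible" if permissible(act, forbidden) else "ruled out"
        print(f"{act}: predicts {sorted(consequences(act))} -> {verdict}")
```

A full system would also need a way to rank the options that survive this filter; the sketch covers only the “imagine the consequences, then rule out the unacceptable ones” step described above.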

Further developments in these fields could eventually yield robots that make decisions autonomously, reasoning in a way modeled on human logic. “Equipping agents with the capability to compute moral decisions is an indispensable requirement. This is particularly true when the agents are operating in domains where moral dilemmas occur, e.g., in healthcare or medical fields,” the duo says.

Such robots could also be used to extract the basic notions of human morality, inferred from their logical and unbiased behavior. That knowledge could then serve as a basis for, say, a new way of teaching logic to small children, in a manner that is not influenced by a tutor's political or religious beliefs.