Sep 14, 2010 15:16 GMT

A team of robotics experts has produced the first machine that can deceive and sneak around while trying to avoid capture. The accomplishment says a lot about the team's skill in AI programming.

Analysts call this the first instance in which robots were endowed with the capability to exhibit deceptive behavior. The machines were developed at the Georgia Institute of Technology (Georgia Tech).

“We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine, and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered,” explains roboticist Ronald Arkin.

He goes on to say that he and his group first developed machines capable of hiding from each other, by instilling in them the ability to recognize potentially dangerous situations.

The automated devices were then made capable of figuring out when a certain scenario warranted craftiness, LiveScience reports.

One of the necessary conditions was making the robots aware that an encounter with the other machine would result in a confrontation.

Another was showing them that hiding and behaving deceptively could bring more benefits than drawbacks in the long run. These abilities were then demonstrated in 20 hide-and-seek experiments.
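In rough terms, the two conditions amount to a simple gate: deceive only when a confrontation is expected and the payoff of deception outweighs its cost. The snippet below is a minimal, hypothetical sketch of such a rule for illustration only; it is not the Georgia Tech team's code, and every name in it is made up.

```python
# Hypothetical illustration of the two conditions described above;
# not the researchers' actual algorithm or implementation.

def should_deceive(conflict_expected: bool,
                   expected_benefit: float,
                   expected_cost: float) -> bool:
    """Deceive only if an encounter would mean conflict AND the long-run
    payoff of deceiving outweighs its drawbacks."""
    return conflict_expected and expected_benefit > expected_cost

# Example: an encounter means capture (conflict), and laying a false trail
# is judged more valuable than the effort it takes.
print(should_deceive(conflict_expected=True,
                     expected_benefit=0.8,
                     expected_cost=0.2))  # True -> attempt deception
```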

The two robots that participated were both equipped with video cameras, which allowed them to see their opponent and to scan their surroundings for a useful place to hide.

A single safe hiding place was constructed for the deceiver, reachable by three pathways. Each path had markers along it, and the pursuing robot could tell whether any of them had been knocked down by its target.

The deceiving robot naturally headed for the safe place as soon as possible, but not before knocking down markers on a different pathway, so as to send its chaser on a wild goose chase.
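The core of that false-trail trick can be sketched in a few lines. The example below is purely illustrative and hypothetical: the real robots relied on camera-based perception and the authors' own algorithms, and the function and variable names here are invented.

```python
import random

# Hypothetical sketch of the false-trail strategy described above;
# not the team's actual implementation.

def lay_false_trail(paths):
    """Pick one path to actually take and a different one to mark as a decoy."""
    true_path = random.choice(paths)
    decoy_path = random.choice([p for p in paths if p != true_path])
    return true_path, decoy_path

paths = ["left", "center", "right"]
true_path, decoy_path = lay_false_trail(paths)
print(f"Knock down markers on the {decoy_path} path, then hide via the {true_path} path.")
# The seeker, reading the fallen markers, searches the decoy path first.
```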

The deceiving robot meant to show “for example, that it was going to the right and then actually go to the left,” says Alan Wagner, an engineer at Georgia Tech.

“The experimental results weren't perfect, but they demonstrated the learning and use of deception signals by real robots,” Wagner says. He adds that the deceiving robot succeeded in fooling the other about 75 percent of the time.

The remaining 25 percent of attempts failed because the hiding robot did not manage to knock down the markers on the other pathways.

The new achievements are detailed in a paper published online September 3 in the International Journal of Social Robotics.