The technology that will enable robots to teach humans is under development

Mar 5, 2012 11:54 GMT

Fully aware that our eyes relay vast amounts of information about us, researchers at the University of Wisconsin-Madison (UW-Madison) are developing a new technology that could allow robots to teach humans.

In order to do that, the machines will have to become excellent judges of human body language. This is a complex task, since even psychologists do not fully understand how body language influences the way we behave toward one another.

One question that requires an answer is to what extent body language supersedes the things we actually say or do. UW-Madison investigators are working hard to create machines that can pick up on how our bodies behave and act accordingly.

Another possible application of this research is making future robots and animated characters appear more human-like. There are several advantages to that capability, including the fact that humans would be more likely to cooperate when working under a robot's instructions.

While this may seem like science fiction to many, remember that Robonaut 2, a machine assistant aboard the International Space Station, will soon begin to help astronauts during spacewalks.

Since a machine can store far more data than a human can, it stands to reason that the robotic assistant of the future will carry the blueprints or schematics of whatever device humans are trying to repair.

Under such circumstances, trusting the robot's instructions could prove to be of paramount importance. The UW-Madison group believes that its research may help build this trust and make it easier for joint efforts to succeed.

“It turns out that gaze tells us all sorts of things about attention, about mental states, about roles in conversations,” explains UW-Madison computer scientist and study team member Bilge Mutlu. The work is being carried out with support from the US National Science Foundation (NSF).

“These are behaviors that can be modeled and then designed into robots so that they (the behaviors) can be used on demand by a robot whenever it needs to refer to something and make sure that people understand what it's referring to,” the investigator adds.
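To make Mutlu's point concrete, here is a minimal sketch in Python of what such an on-demand referential gaze behavior might look like: the robot glances at the object it is referring to, then returns its gaze to the listener. All names here (ReferentialGaze, gaze_angles, the pose format) are hypothetical illustrations, not part of the UW-Madison team's actual software.

```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    """A 3-D point in the robot's reference frame (meters)."""
    x: float
    y: float
    z: float

def gaze_angles(head_pos: Vec3, target: Vec3) -> tuple[float, float]:
    """Compute the yaw and pitch (radians) that aim the head at `target`."""
    dx = target.x - head_pos.x
    dy = target.y - head_pos.y
    dz = target.z - head_pos.z
    yaw = math.atan2(dy, dx)                    # left/right rotation
    pitch = math.atan2(dz, math.hypot(dx, dy))  # up/down rotation
    return yaw, pitch

class ReferentialGaze:
    """A reusable 'look at what I am talking about' behavior.

    The robot glances at the referent while naming it, then returns
    its gaze to the listener -- signaling what 'it' refers to.
    """

    def __init__(self, head_pos: Vec3, glance_duration: float = 1.0):
        self.head_pos = head_pos
        self.glance_duration = glance_duration  # seconds spent on the referent

    def refer_to(self, referent: Vec3, listener: Vec3) -> list[dict]:
        """Return a timed sequence of head poses for one referential glance."""
        ref_yaw, ref_pitch = gaze_angles(self.head_pos, referent)
        lis_yaw, lis_pitch = gaze_angles(self.head_pos, listener)
        return [
            {"yaw": ref_yaw, "pitch": ref_pitch,
             "hold": self.glance_duration},  # look at the object
            {"yaw": lis_yaw, "pitch": lis_pitch,
             "hold": 0.5},                   # return gaze to the person
        ]

# Example: glance at a tool on the table while instructing the listener.
if __name__ == "__main__":
    behavior = ReferentialGaze(head_pos=Vec3(0.0, 0.0, 1.5))
    plan = behavior.refer_to(referent=Vec3(1.0, -0.4, 0.8),  # the tool
                             listener=Vec3(1.2, 0.6, 1.6))   # the person
    for pose in plan:
        print(f"yaw={pose['yaw']:.2f} rad, pitch={pose['pitch']:.2f} rad, "
              f"hold {pose['hold']:.1f} s")
```

Packaging the glance as a discrete, reusable behavior mirrors Mutlu's description of gaze cues that a robot can invoke “whenever it needs to refer to something.”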

“The goal of the experiment is to see if we could achieve a high-level outcome, like learning, by controlling an animated character's gaze,” concludes UW-Madison computer scientist Michael Gleicher.