Robots must understand more than just words

Mar 19, 2009 08:55 GMT

Formal logic has brought robotics a long way over the years, but this approach to making smarter robots may no longer be sufficient. It works fine for beating someone at chess, or for matching web pages to search queries, but it is of little use in building robots that can associate concepts with real-world objects. It takes a lot more than extensive programming to get a machine to fetch a beer from the fridge, researchers say.

This example is a telling one, for a simple reason: it combines many actions with a great deal of associations. When you ask a robot to get you a beer, it first needs to know what a beer is and where one would usually be found. It then has to get to the fridge and figure out that it must open the door before it can help itself to a can. Next, it has to distinguish the beer from the soda and grasp the former in such a way that it doesn't crush the can.

When bringing the beer over, it would need to place it in your hand, not simply drop it in your lap. And all of these actions would have to be triggered by a single line, such as “give me a beer.” No amount of formal logic alone can deliver this level of coordination. It may be obtained for pre-defined tasks such as this one, but if someone then asks the machine to bring over a shovel, it will not be able to comply.
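To make that brittleness concrete, here is a minimal sketch of what such a hand-scripted routine might look like (the object table, function names, and action strings are all invented for illustration; this is not any actual robot's code):

```python
# Hypothetical sketch of a hand-scripted fetch routine. Everything here
# (the table, the names, the action strings) is invented for illustration.

KNOWN_OBJECTS = {
    "beer": {"location": "fridge", "grip": "gentle"},  # don't crush the can
    "soda": {"location": "fridge", "grip": "gentle"},
}

def fetch(command: str) -> list[str]:
    """Map a command like 'give me a beer' onto a fixed action script."""
    for name, info in KNOWN_OBJECTS.items():
        if name in command:
            return [
                f"navigate to {info['location']}",
                "open the door",
                f"pick out the {name} among similar containers",
                f"grasp it with a {info['grip']} grip",
                "carry it to the user",
                "place it in the user's hand",  # not: drop it in the lap
            ]
    # Anything outside the pre-defined table simply fails.
    raise ValueError(f"no plan for: {command!r}")

for cmd in ("give me a beer", "bring over a shovel"):
    try:
        print(fetch(cmd))
    except ValueError as err:
        print("robot is stuck:", err)
```

The shovel request fails not because the machine lacks an arm, but because nobody ever wrote a rule for shovels.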

Future robots need to learn from experience and figure out for themselves which actions to perform in order to carry out a command. “People realized at some point that you can only get so far with a logical approach. At some point these symbols have to be connected to the world,” MIT Media Lab AI researcher Matt Berlin explained. “People learn what a word means in a truly grounded way,” he told LiveScience.
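As a toy illustration of the grounding Berlin describes (the scenes, features, and helper names below are invented; this sketches the general idea, not the Media Lab's method), a learner can attach a word to the percepts that reliably co-occur with it across many experiences:

```python
from collections import Counter, defaultdict

# Toy cross-situational learner: "ground" a word by counting which
# perceived features co-occur with it. Scenes and features are made up.

word_to_features = defaultdict(Counter)

def observe(utterance: str, percepts: set[str]) -> None:
    # Each time a word is heard, credit every feature visible in the scene.
    for word in utterance.split():
        word_to_features[word].update(percepts)

# Simulated experience: "beer" keeps showing up with the same percepts,
# while the incidental location changes from scene to scene.
observe("give me a beer", {"metal_can", "cold", "in_fridge"})
observe("grab that beer", {"metal_can", "cold", "on_table"})
observe("hand me a towel", {"cloth", "soft", "on_rack"})
observe("pass the towel", {"cloth", "soft", "on_floor"})

def grounded_meaning(word: str, top: int = 2) -> list[str]:
    # The learner's working "meaning": the word's most reliable correlates.
    return [feat for feat, _ in word_to_features[word].most_common(top)]

print(grounded_meaning("beer"))   # ['metal_can', 'cold'] (tie order may vary)
print(grounded_meaning("towel"))  # ['cloth', 'soft'] (tie order may vary)
```

After even a few shared scenes, “beer” points at cold, can-like percepts rather than at a dictionary definition, which is what “grounded” means here.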

“As humans, we can detect where there's shadows, colors and objects. That has proven extremely difficult for robots,” Brown University robotics expert Chad Jenkins added. Most researchers in the field are currently trying to develop technologies that let machines see the world the way we do, in all three dimensions, and distinguish between indoor and outdoor environments. All of these things come naturally to us, yet they are next to impossible to implement in machines.
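A deliberately naive example shows why. The sketch below (synthetic images, an invented threshold, and a made-up “blue at the top means sky” rule) guesses indoor versus outdoor from color alone, and is fooled the moment a blue-painted wall shows up:

```python
import numpy as np

# A naive indoor/outdoor guesser over tiny synthetic images. The rule
# and its threshold are invented here to show how hand-written
# perception heuristics break down.

def looks_outdoor(image: np.ndarray) -> bool:
    """Guess 'outdoor' if the upper third of the frame is mostly sky-blue."""
    top = image[: image.shape[0] // 3]
    r, g, b = top[..., 0], top[..., 1], top[..., 2]
    skyish = (b > 0.5) & (b > r) & (b > g)  # crude "blue dominates" test
    return skyish.mean() > 0.5

# A frame with a blue band across the top, i.e. an outdoor scene...
outdoor = np.zeros((9, 9, 3))
outdoor[:3] = [0.3, 0.5, 0.9]
# ...and an indoor shot of a blue-painted wall, which fools the rule.
blue_wall = np.zeros((9, 9, 3))
blue_wall[:] = [0.3, 0.5, 0.9]

print(looks_outdoor(outdoor))    # True: the heuristic seems to work
print(looks_outdoor(blue_wall))  # True: but this is a wall, not the sky
```

Telling that wall from the sky takes depth and context, exactly the three-dimensional understanding researchers are trying to give machines.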