Experts at UCSD are behind the innovation

Jul 9, 2009 08:01 GMT
A hyper-realistic Einstein robot at the University of California, San Diego learned to smile and make facial expressions through a process of self-guided learning

Using machine learning, a robot devised by experts at the University of California, San Diego (UCSD) can decide for itself whether it's time to smile or time to frown. The machine's head looks just like an excited version of Albert Einstein's face, and the team believes that letting the robot decide for itself what expression to adopt is the most efficient way of allowing it to convey the correct emotions to the people it interacts with.

“As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions,” UCSD Jacobs School of Engineering computer science PhD student Tingfan Wu explains. He presented the new achievement on June 6 at the IEEE International Conference on Development and Learning. Wu notes that the other expression-making robots in the world today rely on operators issuing commands to the tiny servomotors acting on their facial muscles, pulling the face into a specific configuration.
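To make that manual approach concrete, here is a minimal sketch of a hand-tuned system of the kind Wu describes; the servo names, activation values, and apply_expression helper are all hypothetical, not the interface of any actual robot:

```python
# One hand-authored expression: an operator picks an activation level
# (0.0-1.0 here) for each facial servo by trial and error.
# Servo names and values are hypothetical, for illustration only.
SMILE = {
    "left_mouth_corner": 0.85,   # pull the mouth corner up and back
    "right_mouth_corner": 0.85,
    "left_cheek_raiser": 0.40,   # a slight squint around the eyes
    "right_cheek_raiser": 0.40,
    "inner_brow": 0.10,
    # ... and so on, one entry per servo, roughly 30 in total
}

def apply_expression(preset: dict) -> None:
    """Send each activation to its servo (hardware call stubbed out)."""
    for servo, level in preset.items():
        print(f"set {servo} -> {level:.2f}")

apply_expression(SMILE)
```

Every expression authored this way has to be tuned by hand, value by value, which is exactly the bottleneck described next.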

But this method is time-consuming, and very few engineers in the world today can author expressions on their robots' faces in real time. That is why the UCSD team believes automating the process is the better approach. The computing power of these machines lets them operate all of their artificial muscles efficiently and almost instantly. The new robot, for example, has about 30 facial muscles, each connected to a small servomotor via strings.
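As a sketch of how such a rig might be driven programmatically (an assumption about the interface, not the team's published code), the whole face can be treated as a single activation vector with one entry per string-driven servo:

```python
import numpy as np

NUM_SERVOS = 30  # roughly one per artificial facial muscle

def clamp_command(command: np.ndarray) -> np.ndarray:
    """Keep every activation in [0, 1]: the strings can only pull,
    and over-driving a servo risks burning it out."""
    return np.clip(command, 0.0, 1.0)

neutral = np.zeros(NUM_SERVOS)                     # face at rest
rng = np.random.default_rng(1)
attempt = clamp_command(neutral + rng.uniform(0.0, 0.3, NUM_SERVOS))
print(attempt.round(2))                            # one tentative expression
```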

“During the experiment, one of the servos burned out due to mssconfiguration [sic]. We therefore ran the experiment without that servo. We discovered that the model learned to automatically compensate for the missing servo by activating a combination of nearby servos. Currently, we are working on a more accurate facial expression generation model as well as systematic way to explore the model space efficiently,” the researchers said at the presentation.
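One way to picture that compensation is as a constrained search over the learned model: hold the dead servo at zero and ask which combination of the remaining servos best reproduces the target expression. The sketch below assumes the robot has a forward model mapping servo commands to predicted expression features, like the one sketched at the end of this article; the function names are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

NUM_SERVOS = 30

def compensate(forward_model, target, dead_servo):
    """Find a command that best reproduces `target` expression features
    while the burned-out servo is held at zero."""
    def loss(cmd):
        cmd = cmd.copy()
        cmd[dead_servo] = 0.0             # the dead servo cannot move
        return float(np.sum((forward_model(cmd) - target) ** 2))

    result = minimize(
        loss,
        x0=np.full(NUM_SERVOS, 0.5),      # start from mid-range activations
        bounds=[(0.0, 1.0)] * NUM_SERVOS,
        method="L-BFGS-B",
    )
    best = result.x
    best[dead_servo] = 0.0
    return best                           # nearby servos take up the slack

# Tiny demo with a made-up linear forward model:
demo_W = np.random.default_rng(2).normal(size=(12, NUM_SERVOS))
target = demo_W @ np.full(NUM_SERVOS, 0.6)   # features of the intact face
cmd = compensate(lambda c: demo_W @ c, target, dead_servo=7)
print("dead servo stays off:", cmd[7] == 0.0)
```

In effect, the optimizer raises the activations of servos pulling on nearby strings to approximate the motion the dead servo used to provide.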

They added that facial expression detection software was the key to their success. The robot was left in front of a mirror and allowed to analyze its own expressions over many trials. Essentially, this let robo-Einstein build an association map between the commands issued to its servomotors and the end result, that is, the facial expression those commands produced.
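A minimal sketch of that mirror-learning loop follows, with an invented stand-in for the expression-detection software; the article names neither the detector nor the model class the team used, so the linear detector and ridge regression here are assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge

NUM_SERVOS = 30     # string-driven facial servos
NUM_FEATURES = 12   # hypothetical outputs of the expression detector

rng = np.random.default_rng(0)
_W = rng.normal(size=(NUM_FEATURES, NUM_SERVOS))

def detect_expression(command: np.ndarray) -> np.ndarray:
    """Stand-in for the camera plus expression-detection software
    watching the mirror: a fixed random linear response with noise,
    purely so the sketch runs end to end."""
    return _W @ command + 0.01 * rng.normal(size=NUM_FEATURES)

# "Motor babbling": issue random commands, watch the mirror, log results.
commands = rng.uniform(0.0, 1.0, size=(500, NUM_SERVOS))
features = np.stack([detect_expression(c) for c in commands])

# The association map: a forward model from servo commands to the
# expression features the detector reports.
forward = Ridge(alpha=1.0).fit(commands, features)
print("training R^2:", round(forward.score(commands, features), 3))
```

With a forward model like this in hand, producing a desired expression reduces to searching for the command whose predicted features best match the target, the same inversion used in the compensation sketch above.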