May 31, 2011 13:20 GMT  ·  By

Though humans have it innately, the ability to distinguish contours in an image is apparently very difficult to build into a machine. It is an essential part of creating robotic vision, and experts still struggle to get computers to recognize objects. A new study now aims to improve these capabilities.

The work addresses one of the central problems in computer vision – detecting the boundaries of objects, explain researchers at the Massachusetts Institute of Technology (MIT), who led this study.

An algorithm the experts created is roughly 50,000 times more efficient than previous approaches. This innovation could advance the field of research by several years.

In humans, detecting boundaries is mostly instinctive. We learn to separate cars from trees and other people, the ground from the sky, and so on, from before we can remember. Emulating this ability in software, however, is tremendously complex.

The secret, principal research scientist John Fisher explains, is segmentation. He says that the way in which the computer segments the image it sees needs to closely resemble the way humans do it.

Fisher holds an appointment at the MIT Computer Science and Artificial Intelligence Lab (CSAIL). He conducted the work with Department of Electrical Engineering and Computer Science graduate student Jason Chang.

But the problem is that every human sees boundaries differently. As such, “we shouldn't come up with one segmentation. We should come up with a lot of different segmentations that kind of represent what humans would also segment,” Chang explains.

“We want an algorithm that's able to segment images like humans do,” he adds. The new algorithm therefore varies the balance between two measures of segmentation quality to generate a set of candidate segmentations, which it then uses to identify boundaries.
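
To make that idea concrete, here is a minimal Python sketch of the general strategy; it is not the CSAIL algorithm itself. The two quality measures used here (within-segment color error and the number of segments) and the toy quantization-based segmenter are stand-ins assumed purely for illustration.

```python
import numpy as np

def fidelity(image, labels):
    """How well each segment's mean gray value explains its pixels (lower is better)."""
    return sum(np.sum((image[labels == lab] - image[labels == lab].mean()) ** 2)
               for lab in np.unique(labels))

def simplicity_penalty(labels):
    """A crude measure of over-segmentation: the number of distinct segments."""
    return float(len(np.unique(labels)))

def toy_candidates(image, bin_counts):
    """Toy segmenter: quantize gray values into k regions for several values of k."""
    cands = []
    for k in bin_counts:
        edges = np.linspace(image.min(), image.max(), k + 1)[1:-1]
        cands.append(np.digitize(image, edges))
    return cands

def segmentations_across_balances(image, bin_counts, weights):
    """Sweep the trade-off weight: each balance can pick a different 'best' segmentation."""
    candidates = toy_candidates(image, bin_counts)
    picks = []
    for w in weights:
        scores = [fidelity(image, c) + w * simplicity_penalty(c) for c in candidates]
        picks.append(candidates[int(np.argmin(scores))])
    return picks

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A synthetic image: two flat regions plus a little noise.
    img = np.hstack([rng.normal(0.2, 0.05, (16, 16)), rng.normal(0.8, 0.05, (16, 16))])
    for seg in segmentations_across_balances(img, bin_counts=[2, 4, 8],
                                             weights=[0.0, 0.2, 5.0]):
        print("segments in this candidate:", len(np.unique(seg)))
```

Each weight favors a different compromise between fitting the pixel data closely and keeping the segmentation simple, so the output is a set of plausible candidate segmentations rather than a single answer.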

Measuring these differences is possible by tracking factors such as color value and contrast. The approach is helped by the fact that the entire system is trained to favor simple interpretations of a scene.

As such, an image will not end up with contours around every single speck of dust or glint of light. The goal is to highlight only the salient content of an image, the MIT team explains. Other experts have shown an interest in this approach.
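
A simple way to see how such low-level cues can be tracked, and how tiny specks can be kept from producing contours, is the gradient-based sketch below. The contrast measure and the suppression threshold are generic illustrations and assumptions for this example, not the measures used in the MIT work.

```python
import numpy as np

def boundary_strength(gray, min_contrast=0.1):
    """Local contrast from finite-difference gradients, with weak responses suppressed.

    Strong changes in gray value hint at object boundaries, while changes below
    `min_contrast` (e.g. faint specks or noise) are ignored. This is a generic
    illustration, not the measure used in the CSAIL algorithm.
    """
    gy, gx = np.gradient(gray.astype(float))   # vertical and horizontal changes
    strength = np.hypot(gx, gy)                # combined contrast at each pixel
    strength[strength < min_contrast] = 0.0    # drop insignificant detail
    return strength

if __name__ == "__main__":
    img = np.zeros((8, 8))
    img[:, 4:] = 1.0                 # one genuine vertical edge
    img[2, 1] = 0.05                 # a faint speck that should be ignored
    print(boundary_strength(img).round(2))
```

Only the genuine edge survives the threshold; the faint speck produces gradients too weak to register as a contour.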

“There are a lot of competing methodologies out there, so it's hard for me to say that this is going to revolutionize segmentation [but the method is] an interesting new vehicle that I think could get a lot of mileage, even beyond segmentation,” explains Anthony Yezzi.

The expert, who was not a part of the research team, is a professor of electrical and computer engineering at the Georgia Institute of Technology (Georgia Tech).