The new tech was demoed with the help of personal assistant Cortana

Jul 16, 2014 07:32 GMT

At the company's Faculty Summit on Monday, Microsoft Research presented what it calls its most advanced artificial intelligence system, one that can successfully recognize objects in photos captured with a mobile phone's camera.

The new system, called Project Adam, is based on a neural network that Microsoft says took no less than 18 months to build. According to the company, the service is 50 times faster than Google's comparable technology, twice as accurate, and uses 30 times fewer machines.

In a demo by Microsoft researcher Johnson Apacible, Cortana showed that the new AI can tell dogs apart by identifying their breeds each time a photo was provided for analysis.

When a photo of a human was submitted instead of a dog, Cortana quickly responded with "I believe this is not a dog," showing that the new system can also tell humans and dogs apart.
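Project Adam's own code has not been published, but as a rough illustration of the kind of image-classification pipeline the demo describes, here is a minimal sketch using a publicly available pretrained deep neural network from PyTorch/torchvision. The model choice, the "dog.jpg" input file, and the dog-or-not check are all assumptions for illustration, not details of Microsoft's system:

```python
# Minimal sketch, NOT Project Adam (which is unpublished): classifying a photo
# with a publicly available pretrained deep neural network via torchvision.
# "dog.jpg" is a hypothetical input file used for illustration.
import torch
from PIL import Image
from torchvision import models

# Load an ImageNet-pretrained classifier; ImageNet's 1000 classes include
# roughly 120 dog breeds, so a dog photo's top prediction is usually its breed.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # resize, crop, normalize as the model expects

img = Image.open("dog.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)  # class probabilities
top_prob, top_idx = probs.max(dim=1)
label = weights.meta["categories"][top_idx.item()]

# In ImageNet-1k, dog breeds occupy class indices 151-268; anything outside
# that range gets the demo's "not a dog" style of answer.
if 151 <= top_idx.item() <= 268:
    print(f"Looks like a {label} ({top_prob.item():.1%} confidence)")
else:
    print("I believe this is not a dog")
```

The real system was trained at a vastly larger scale, of course; the sketch only shows the shape of the pipeline: preprocess the photo, run it through the network, and read off the most probable class.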

Of course, this is only the beginning for a technology that could be refined to help in many research areas, such as a system that describes their surroundings to blind people.

“Computers until now have been really good number crunchers. Now, we’re starting to teach them to be pattern recognizers. Marrying these two things together will open a new world of applications that we couldn’t imagine doing otherwise. Imagine if you could help blind people see by pointing a cellphone at a scene and having it describe the scene to them. We could do things like take a photograph of food we’re eating and have it provide us with nutritional information. We can use that to make smarter choices,” Trishul Chilimbi, one of the Microsoft researchers who worked on Project Adam, said.

Microsoft's researchers explain that, like any new technology, it could take a while to mature, but there's a tremendous opportunity ahead to describe the world around us with the help of the devices we carry every day.

“What this system has proven is that, with DNN, you could scale that. You don’t need machine-learning experts trying to figure out what makes this look like a dog. The system learns that on its own. There is this promise of massive scale,” the researchers concluded, referring to deep neural networks and suggesting that it's only a matter of time before you hear of this AI again.