Every pixel on the screen of the latest generation of Microsoft Surface can literally see user interaction, thanks to PixelSense technology.
Hands-on experience with PixelSense is currently available only in the Samsung SUR40 for Microsoft Surface 2.0, and if you have ever wondered what this innovation is all about, watch the video embedded below.
Microsoft put together the video to explain Microsoft Surface 2.0 PixelSense, offering a rare glimpse at some of the great minds behind the project.
“Microsoft’s PixelSense, in the new Samsung SUR40 for Microsoft Surface, allows a display to recognize fingers, hands, and objects placed on the screen, enabling vision-based interaction without the use of cameras. The individual pixels in the display see what’s touching the screen and that information is immediately processed and interpreted,” Microsoft’s Luis Cabrera stated.
Cabrera uses an eyes-plus-brain analogy to explain the concept of seeing pixels, made possible by the combination of PixelSense and Microsoft Surface 2.0.
“You need both, working together, to see. In this case, the eye is the sensor in the panel, it picks up the image and it feeds that to the brain which is our vision input processor that recognizes the image and does something with it. Taken in whole…this is PixelSense technology,” he notes.
The PixelSense-enabled pixels of the Samsung SUR40's LCD can see not only fingers but also objects, tags, and blobs, as long as they come in contact with the display.
“IR back light unit provides light (through the optical sheets, LCD and protection glass) that hits the contact. Light reflected back from the contact is seen by the integrated sensors. Sensors convert the light signal into an electrical signal/value. Values reported from all of the sensors are used to create a picture of what is on the display,” Cabrera explains.
Such pictures are analyzed by the hardware over and over again, and only after each one has been evaluated and understood does the PC at the heart of Microsoft Surface 2.0 receive the information.
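To make the pipeline Cabrera describes more concrete, here is a minimal sketch in Python, not the actual SUR40 firmware: it assumes each pixel's integrated IR sensor reports a reflectance value, and shows how thresholding those values per sensor yields the "picture" of what is touching the display. The function name and threshold are illustrative, not part of any real PixelSense API.

```python
# Hypothetical sketch of PixelSense-style sensing: each pixel's integrated
# IR sensor reports how much backlight is reflected by whatever touches
# the glass; thresholding the values yields a binary contact picture.

def sensor_frame_to_contacts(frame, threshold=0.5):
    """Convert a 2D grid of sensor reflectance values (0.0-1.0)
    into a binary image: 1 where something is touching the display."""
    return [[1 if value >= threshold else 0 for value in row]
            for row in frame]

# A toy 4x6 sensor frame: a fingertip reflects strongly near the center,
# while the rest of the panel sees only faint ambient reflection.
frame = [
    [0.05, 0.08, 0.10, 0.09, 0.06, 0.04],
    [0.07, 0.20, 0.85, 0.90, 0.15, 0.05],
    [0.06, 0.25, 0.95, 0.88, 0.12, 0.06],
    [0.04, 0.09, 0.11, 0.10, 0.07, 0.05],
]

contacts = sensor_frame_to_contacts(frame)
for row in contacts:
    print(row)  # the "picture" handed off for further analysis
```

In the real device this picture would then be analyzed repeatedly, as the article notes, to classify contacts as fingers, tags, or blobs before the PC receives the result.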