Video demonstration

Mar 26, 2009 17:15 GMT

With the Windows 7 Natural User Interface, Microsoft intends to rely on predictable and reliable gestures in order to build touch habits. The new Windows 7 interaction model was conceived as an evolution of the platform's graphical user interface, and it is designed to accommodate all applications, even programs that were not developed to be “touchable.” The video embedded at the bottom of this article offers a demonstration of the main touch gestures that users will be able to perform with Windows 7, provided that the operating system is running on touch-enabled hardware with the proper drivers installed. Windows 7 features only a small set of core touch gestures: tap and double-tap, drag, scroll, zoom, two-finger tap, rotate, flick, and press-and-hold.

“To be predictable the action should relate to the result – if you drag content down, the content should move down. To be reliable, the gesture should do roughly the same action everywhere, and the gesture needs to be responsive and robust to reasonable variations. If these conditions are met then people are far more likely to develop habits and use gestures without consciously thinking about it,” Reed Townsend, program manager, Microsoft Touch Team, revealed. “We’ve intentionally focused on this small set of system-wide gestures in Win7. By keeping the set small we reduce misrecognition errors – making them more reliable. We reduce latencies since we need less data to identify gestures. It’s also easier for all of us to remember a small set!”

Tap and double-tap are the equivalents of click and double-click: the gestures can be used to select an item, or to open and execute files and programs. Dragging with a finger mirrors dragging with the mouse; users can tap an item and then drag it, just as they would click and drag, while dragging left or right across text selects it. Scrolling is performed by dragging up or down on a scrollable page, interacting directly with the content rather than with the scroll bar. Users will also discover that scrolling pages have inertia: they can be “tossed” with a quick drag, which expedites the action.
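For developers, these gestures arrive as a window message rather than as separate events. Below is a minimal sketch, in Win32 C++, of an application responding to the pan (scroll) gesture directly. GetGestureInfo and CloseGestureInfoHandle are the actual Windows 7 APIs; the scrolling itself uses ScrollWindowEx as a simple stand-in for application-specific logic.

```cpp
// Minimal sketch: handling the Windows 7 pan (scroll) gesture directly.
// GetGestureInfo/CloseGestureInfoHandle are real Win7 APIs; the scrolling
// itself uses ScrollWindowEx as a simple stand-in for app-specific logic.
#define WINVER 0x0601
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_GESTURE)
    {
        GESTUREINFO gi = {};
        gi.cbSize = sizeof(gi);

        if (GetGestureInfo((HGESTUREINFO)lParam, &gi) && gi.dwID == GID_PAN)
        {
            static POINTS last;
            if (gi.dwFlags & GF_BEGIN)
            {
                last = gi.ptsLocation;            // finger just went down
            }
            else
            {
                // Content tracks the finger: drag down, content moves down.
                // During the inertia phase (GF_INERTIA) the system keeps
                // sending pan messages after the finger lifts, the "toss".
                ScrollWindowEx(hwnd,
                               gi.ptsLocation.x - last.x,
                               gi.ptsLocation.y - last.y,
                               NULL, NULL, NULL, NULL, SW_INVALIDATE);
                last = gi.ptsLocation;
            }
            CloseGestureInfoHandle((HGESTUREINFO)lParam);
            return 0;  // handled; otherwise DefWindowProc closes the handle
        }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```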

“In order to make the gestures reliable, we tuned the gesture detection engine with sample gesture input provided by real people using touch in pre-release builds; these tuned gestures are what you will see in the RC build. We have a rigorous process for tuning,” Townsend added. “Similar to our handwriting recognition data collection, we have tools to record the raw touch data from volunteers while they perform a set of scripted tasks. We collected thousands of samples from hundreds of people. These data were then mined looking for problems and optimization opportunities. The beauty of the system is that we can replay the test data after making any changes to the gesture engine, verifying improvements and guarding against regression in other areas.”
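Microsoft has not published the internals of this tuning pipeline, but the record-and-replay approach Townsend describes maps onto a familiar regression-testing pattern. The sketch below is purely illustrative: TouchSample, RecordedCase, and RecognizeGesture are hypothetical stand-ins, not Microsoft's actual tooling.

```cpp
// Purely illustrative record-and-replay harness in the spirit of the process
// Townsend describes. All types and functions here are hypothetical stand-ins;
// nothing below reflects Microsoft's actual tooling.
#include <iostream>
#include <string>
#include <vector>

struct TouchSample { int x, y; unsigned timeMs; };  // one raw touch point

struct RecordedCase {
    std::string expected;                // gesture label from the scripted task
    std::vector<TouchSample> samples;    // raw input captured from a volunteer
};

// Placeholder for the component under test: the gesture detection engine.
std::string RecognizeGesture(const std::vector<TouchSample>& samples)
{
    return "pan";  // stub; a real engine would classify the sample stream
}

// After any change to the engine, replay every recorded case and count
// mismatches, verifying improvements and guarding against regressions.
int ReplayAll(const std::vector<RecordedCase>& cases)
{
    int failures = 0;
    for (const RecordedCase& c : cases)
        if (RecognizeGesture(c.samples) != c.expected)
            ++failures;

    std::cout << failures << " of " << cases.size() << " cases failed\n";
    return failures;
}
```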

Pinching a couple of fingers together or apart will allow users to perform zoom in or zoom out actions. A two-finger tap is sufficient to zoom in the content to the point of the gesture. Two-finger gestures also enable users to rotate content, by performing a twisting move. Flicking left or right will cause the window to go backward or forward. Pressing a single finger down, or pressing a finger down and then tapping with a second one is the equivalent of a right-click.
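In the message-based model sketched earlier, each of these gestures arrives with its own ID, and the gesture's parameters (zoom distance, twist angle) are packed into the ullArguments field of the GESTUREINFO structure. A hedged sketch of decoding them follows; the On... handlers are assumed application hooks, not Windows APIs.

```cpp
// Sketch: decoding the remaining Windows 7 gesture IDs from a GESTUREINFO.
// The On... handlers are hypothetical application hooks; the IDs, flags, and
// the GID_ROTATE_ANGLE_FROM_ARGUMENT macro come from the Win7 SDK.
#define WINVER 0x0601
#include <windows.h>

void OnZoom(double factor)        { /* app-specific: scale view by factor */ }
void OnRotate(double radians)     { /* app-specific: rotate the content   */ }
void OnTwoFingerTap(POINTS where) { /* app-specific: zoom in at a point   */ }
void OnPressAndTap(POINTS where)  { /* app-specific: show a context menu  */ }

void HandleGesture(const GESTUREINFO& gi)
{
    static ULONGLONG lastZoomDistance;

    switch (gi.dwID)
    {
    case GID_ZOOM:
        // ullArguments carries the current distance between the two fingers;
        // the ratio against the previous message gives the zoom factor.
        if (!(gi.dwFlags & GF_BEGIN) && lastZoomDistance != 0)
            OnZoom((double)gi.ullArguments / (double)lastZoomDistance);
        lastZoomDistance = gi.ullArguments;
        break;

    case GID_ROTATE:
        // ullArguments encodes the twist angle; the SDK macro converts it
        // to radians.
        OnRotate(GID_ROTATE_ANGLE_FROM_ARGUMENT((DWORD)gi.ullArguments));
        break;

    case GID_TWOFINGERTAP:
        OnTwoFingerTap(gi.ptsLocation);
        break;

    case GID_PRESSANDTAP:                 // the touch right-click
        OnPressAndTap(gi.ptsLocation);
        break;
    }
}
```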

“Gestures are built into the system in such a way that many applications that have no awareness of touch respond appropriately; we have done this by creating default handlers that simulate the mouse or mouse wheel. Generally this gives a very good experience, but there are applications where some gestures don’t work smoothly or at all. In these cases the application needs to respond to the gesture message directly,” Townsend stressed.
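The opt-in Townsend mentions is explicit in the public API: a window can tell the system which gestures it wants delivered raw, and which default handlers to suppress, via SetGestureConfig. A brief sketch follows; the particular combination of gestures requested here is only an example.

```cpp
// Sketch: opting a window into direct gesture handling. SetGestureConfig and
// the GC_* flags are real Win7 APIs; which gestures to request is up to the
// application, and this particular combination is only an example.
#define WINVER 0x0601
#include <windows.h>

void EnableCustomGestures(HWND hwnd)
{
    GESTURECONFIG config[] = {
        // Rotation is not delivered unless explicitly requested.
        { GID_ROTATE, GC_ROTATE, 0 },
        // Take over panning, keeping the system's inertia ("toss") behavior.
        { GID_PAN, GC_PAN | GC_PAN_WITH_INERTIA, 0 },
    };

    SetGestureConfig(hwnd, 0, ARRAYSIZE(config),
                     config, sizeof(GESTURECONFIG));
}
```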