At the Intel Developer Forum this week there was a focus on user interface technologies, and while I wasn't expecting this aspect to be important, it makes sense once you see the new integrated graphics engine in Sandy Bridge. With that new focus on graphics came a slew of interesting demos and announcements, and I'll highlight a few here.
Demos – During the Monday keynote from David Perlmutter there were two examples of advanced user interface technologies and paradigms. It's interesting to note that both demos used a two-handed interface, something most consumers have yet to try or master.
GestureTek, a company that makes computer vision control software and devices, showed how you can use your hands to control various media tasks. To use the system you stand in front of a large flat-screen TV and control some on-screen activity with hand and arm gestures. In this demo users controlled a photo slideshow with what could be described as a "wipe and swipe" motion: using one hand you wipe left-to-right to advance through the photos, then swipe back to the other side to repeat the action. You can also rotate an image using two hands combined, in a gesture similar to how you zoom in and out on a multitouch smartphone screen.
The other two-handed demo was from Sixense. Their technology uses a handheld controller-like device (think TV remote) with a six-degree-of-freedom (6 DOF) tracker inside, so all movement and orientation is tracked. Their demo showed a very simple CAD environment consisting of basic squares and spheres. The interaction used a combination of the controllers' position, orientation and buttons to pick up and manipulate those objects and perform simple CAD operations.
Unfortunately both of these demos had problems, not only on stage but overall. While it's encouraging that Intel is focusing on UI technologies, I'd call these particular demos forward-thinking rather than revolutionary. On stage they were tough to see and describe, and at one point the GestureTek demo simply wouldn't recognize the hand movements. Fair enough, and again it's really encouraging to see UI become a focus for Intel and its developers. But it's not hard to imagine that at a future IDF we could see demos that come closer to Minority Report.
Finally, I attended a session and demo space for the new Intel Experience and Interaction Lab. Given that the lab was announced only this summer, it is starting off with a strong showing of new products and technologies. One notable demo was automobile-based, context-aware facial recognition that could, after visually recognizing the driver, set the radio stations, seat position and other environmental controls, and could be extended to navigation or really any personalized aspect of driving. Another demo showed real-time ray-traced video games. There was also a demo of surface computing for the kitchen, which taken literally could be an indication of how future chefs will plan and prepare meals.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.