The launch of Windows 8, and a user experience that is so much more delightful under the fingers, has heralded a point in time where we think about creating experiences that are “touch first”.  At the same time, there’s an incredible amount of innovation around Natural User Interfaces (NUI) anchored in speech and gestures.

There are many who believe it’s neither ergonomic nor desirable to swipe on a laptop screen; we even talk about “gorilla arm” to describe the tired arms that come from inefficient form factors.  Having worked with a touchscreen laptop for some time, however, I can attest that it is the most natural and desirable of interactions: you soon find yourself sitting at an “old fashioned” laptop, swiping the screen and then, with realization and resignation, reaching for the mouse or trackpad in frustration.

Touch is natural.

It is my belief that we will blast past touch as an input metaphor before we know it; the keyboard may have been with us since the 1870s, and the mouse since the 1960s, but I believe that touch will be old before it is new.  It is merely a springboard to gestures.

Microsoft Kinect first brought gesture recognition at scale into our homes and, in an increasing number of instances, our workplaces.  At the intersection of off-the-shelf hardware and innovative software, we can map out an entire room and follow the movements and gestures of one or more people in it.  Simple hardware, such as a low-resolution RGB camera, an IR depth sensor, and an array of relatively inexpensive microphones, allows this kind of technology innovation to be delivered at consumer-marketplace scale.  Seamless integration with innovative software allows tracking of up to 48 different body parts, 30 times a second, predicting the movement of body parts to be able to “see” them even as they disappear behind a table or chair, or another person in the room.
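To make that a little more concrete, here is a minimal sketch, in plain Python rather than anything from the Kinect SDK, of one way a tracker could keep reporting a joint’s position 30 times a second even while the joint is briefly hidden: coast along the last observed velocity until the sensor sees it again.  The frame rate, units and constant-velocity guess are illustrative assumptions on my part, not Microsoft’s actual algorithm.

```python
# Illustrative sketch only -- not the Kinect SDK and not Microsoft's tracker.
FRAME_DT = 1.0 / 30  # skeleton frames arrive roughly 30 times per second


class JointTracker:
    """Tracks a single joint (say, the right hand) in 3D camera space."""

    def __init__(self):
        self.position = None             # (x, y, z) in metres, last reported position
        self.velocity = (0.0, 0.0, 0.0)  # last estimated velocity in metres/second

    def update(self, measurement):
        """measurement is an (x, y, z) tuple, or None when the joint is occluded."""
        if measurement is not None:
            if self.position is not None:
                # Estimate velocity from the two most recent reported positions.
                self.velocity = tuple(
                    (m - p) / FRAME_DT for m, p in zip(measurement, self.position)
                )
            self.position = measurement
        elif self.position is not None:
            # Joint hidden this frame: coast along the last known velocity.
            self.position = tuple(
                p + v * FRAME_DT for p, v in zip(self.position, self.velocity)
            )
        return self.position


if __name__ == "__main__":
    hand = JointTracker()
    # Two visible frames, two occluded frames, then the hand reappears.
    frames = [(0.00, 1.0, 2.0), (0.03, 1.0, 2.0), None, None, (0.12, 1.0, 2.0)]
    for frame in frames:
        print(hand.update(frame))
```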

However, Kinect operates at “room scale” – the experiences that most benefit from it are whole-body experiences, such as stepping into a game, or “gross motor” movements such as waving a hand.

Today, PrimeSense (the company that developed the 3D sensing technology in Microsoft Kinect) announced a compact 3D sensor targeted at consumer electronics and mobile devices, to be unveiled at CES in January and launched in mid-2013.  No need to sharpen your fingers, just sharpen your gestures…

An area of intense interest for me is how this kind of gesture recognition moves from “gross motor” (waving your hand to get the attention of your television, like a Samsung Smart TV) to “fine motor” (pointing your finger at a sentence or icon, or swiping your finger as if flipping the page on a Kindle).  This kind of “near field interface” is, I think, an emerging area of hardware and software intersection that will enable a whole new slew of more natural digital experiences.  Leap Motion has gained a lot of mindshare with its yet-to-be-released platform, which demonstrates, I think, exactly the kind of natural interactions that “near field interfaces” can deliver.
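To give a flavour of what “fine motor” recognition can involve, here is a toy sketch of a page-flip swipe detector: watch a fingertip’s horizontal position and report a swipe when it travels far enough, fast enough, in one direction.  The sample rate and thresholds are made-up assumptions, and this has nothing to do with Leap Motion’s (or anyone else’s) actual API.

```python
# Toy sketch of fine-motor gesture detection; all numbers are assumptions.
SAMPLE_DT = 1.0 / 60     # assumed fingertip sample rate (seconds per sample)
MIN_DISTANCE = 0.10      # metres the fingertip must travel to count as a swipe
MAX_DURATION = 0.40      # seconds the motion may take before it is a slow drift


def detect_swipe(xs):
    """Return 'left', 'right', or None for a window of fingertip x-positions."""
    if len(xs) < 2:
        return None
    duration = (len(xs) - 1) * SAMPLE_DT
    distance = xs[-1] - xs[0]
    # Require a quick, decisive movement rather than a slow drift.
    if duration <= MAX_DURATION and abs(distance) >= MIN_DISTANCE:
        return "right" if distance > 0 else "left"
    return None


if __name__ == "__main__":
    samples = [0.00, 0.02, 0.05, 0.09, 0.12]   # fingertip moving to the right
    print(detect_swipe(samples))               # -> "right"
```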

But I expect that Leap Motion’s mindshare will not translate into market share, and that we will see many more entrants and innovations in gesturing to the devices on our desks and in our pockets.