The Kinect is at once a niche and – forgive the pun – game-changing technology. Although it has not quite resonated with customers in actual gaming, the potential is there for some fantastic experiences in the coming years.

One issue with all systems that detect motion is accuracy. Version 2 of the Kinect is certainly more powerful than its predecessor, but it still cannot capture the nuances of finger movement. Enter 'Handpose', a project that Microsoft Research has been working on for some time.

The researchers combined 3D hand modeling with machine learning "to teach the computer how to infer hand poses".
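The post is light on technical detail, but model-fitting hand trackers of this kind generally work by optimizing the parameters of a 3D hand model until a rendering of it matches the observed depth image. Below is a minimal, hypothetical sketch of that idea in Python – the `render`, `energy`, and `fit_pose` functions are toy stand-ins for illustration only, not Handpose's actual code:

```python
import math
import random

# Toy "hand model": a pose is a list of joint angles, and render() maps a
# pose to a crude 1-D "depth profile" (a stand-in for a rendered depth image).
def render(pose):
    return [math.sin(sum(pose[: i + 1])) for i in range(len(pose))]

def energy(pose, observed):
    # Sum of squared differences between the rendered and observed "depth".
    return sum((r - o) ** 2 for r, o in zip(render(pose), observed))

def fit_pose(observed, n_candidates=200, n_iters=60, seed=0):
    # Greedy random search: a simplified stand-in for the particle-swarm-style
    # optimizers commonly used in model-fitting hand trackers.
    rng = random.Random(seed)
    best = [0.0] * len(observed)
    best_e = energy(best, observed)
    for it in range(n_iters):
        scale = 1.0 / (1 + it * 0.2)  # shrink the search radius over time
        for _ in range(n_candidates):
            cand = [p + rng.gauss(0, scale) for p in best]
            e = energy(cand, observed)
            if e < best_e:
                best, best_e = cand, e
    return best, best_e

if __name__ == "__main__":
    true_pose = [0.4, -0.2, 0.7]
    observed = render(true_pose)     # pretend this came from the depth sensor
    estimated, err = fit_pose(observed)
    print(err)
```

The key design point is that the tracker never needs labeled finger positions at runtime: it only needs a way to render its model and score the match, which is where the learned components help steer the search.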

So what are some possible uses for such a technology? In a detailed blog post, Microsoft suggests these scenarios:

  • Artificial intelligence – Helping computers interpret our body language, including mood and pointing at objects
  • Video games – Reaching out and grabbing something instead of using a controller
  • Sign language translation
  • General computer tasks – Email sorting and manipulation of objects on screen, aka the 'Minority Report' experience

Interestingly, the article does not mention Microsoft's HoloLens, which relies heavily on hand gestures for clicking and object manipulation. Based on our experience, HoloLens appears to make at least some use of this technology – and if it does not yet, it likely will in the future.

Watch the videos above, which explain how it all works, or read the detailed article from Next at Microsoft.

Source: Next at Microsoft