Introducing the Kinect

Hey everyone,

I know it’s too late to announce the Kinect to you technology-devouring early adopters. ;) But maybe there is still a single person out there who didn’t follow other sources during my holiday, so I wanted to point to some nice videos explaining what you can do with it – something to enjoy over the weekend. I just don’t have time for more today.

You must have heard of the Kinect, aka Project Natal, from Microsoft: at first glance it looks like a new Sony EyeToy USB camera – by Microsoft. Connect it to your gaming box and enjoy some imagery! But it’s way more than that. It also ships an additional infrared projector and infrared camera to measure the distance to the objects in front of the device (rather than only capturing the RGB color values we know from a normal webcam). Update: I originally wrote that it uses the time-of-flight principle – it actually uses stereo triangulation with the help of a projected IR pattern. Why is this piece of hardware so great? Because it’s so damn cheap and starts nothing less than a revolution! To get a first impression, kick off with a USA Today interview video with Microsoft marketing showing the console add-on in action:
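For the curious: the basic idea behind such a triangulation setup is the classic stereo relation Z = f * b / d – the further away a surface is, the smaller the disparity d between where a projected IR dot is expected and where the IR camera actually sees it. Here is a tiny sketch of that relation; all numbers are made up for illustration and are not the device’s real calibration values:

    # Toy depth-from-disparity calculation - illustrative values only,
    # not real Kinect calibration.
    focal_length_px = 580.0   # focal length of the IR camera in pixels (assumed)
    baseline_m = 0.075        # distance between IR projector and IR camera (assumed)

    def depth_from_disparity(disparity_px: float) -> float:
        """Classic stereo relation: Z = f * b / d."""
        return focal_length_px * baseline_m / disparity_px

    print(depth_from_disparity(21.75))  # roughly 2 metres for this toy disparity

Don’t read too much into the numbers – the point is only that depth falls out of a simple per-pixel disparity measurement.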

To see what the marketing people planned it to be, go and watch the Microsoft advertisement videos.

OK, now you will have to admit: this is cool. But as a person focused purely on Augmented Reality, you might ask: so what? Why is this AR? Well, the truth is, the videos above are not AR. But the device Microsoft created (with the help of buying some small Israeli company a few years ago) holds a lot of potential for AR. It’s the perfect (first-release) tool to help solve some major problems AR is facing today!

To mention only two issues for now, I’d like to point out the always-fun-to-dream-about topic of future gesture interaction. A new MIT video gets pretty close to the often-quoted Minority Report, as can be seen here:

I’m always complaining about how we interact with AR objects today (holding a marker paddle or cube, etc.), and better gesture interaction could really help big time in AR.

Secondly – closely connected to gesture and hand interaction – there is the occlusion problem in AR: a virtual object always overlays the real world and doesn’t fit in when it should actually sit behind other real objects. This well-known occlusion problem has been solved for known spaces (by pre-modelling occlusion geometry) or faked with simple chroma-keying tricks to give the illusion of proper depth placement. But now we can come up with a much better solution by taking the depth/distance of each pixel captured by our pimped webcam into account! Metaio gives us a first glimpse of their research here:
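To make the idea concrete, here is a minimal sketch (in Python/NumPy, with hypothetical array names – not tied to any particular SDK) of the per-pixel z-test such a depth camera enables: for every pixel, draw the virtual content only where it is closer to the camera than the real scene.

    import numpy as np

    # Hypothetical inputs: the camera image, its per-pixel depth in metres,
    # and a rendered virtual layer with its own depth buffer (all 640x480).
    H, W = 480, 640
    real_rgb = np.zeros((H, W, 3), dtype=np.uint8)          # webcam frame
    real_depth = np.full((H, W), 2.0, dtype=np.float32)     # say, a wall 2 m away
    virt_rgb = np.zeros((H, W, 3), dtype=np.uint8)
    virt_depth = np.full((H, W), np.inf, dtype=np.float32)  # inf = no virtual content

    virt_rgb[200:280, 300:380] = (0, 255, 0)   # a green virtual box...
    virt_depth[200:280, 300:380] = 2.5         # ...placed half a metre behind the wall

    # Per-pixel z-test: the virtual pixel wins only where it is closer than reality.
    virtual_wins = virt_depth < real_depth
    composite = np.where(virtual_wins[..., None], virt_rgb, real_rgb)
    # Here the box stays hidden behind the wall - exactly the occlusion
    # behaviour classic marker-based AR can't give you without pre-modelling.

Of course a real implementation would do this on the GPU, writing the sensor’s depth map into the depth buffer before rendering the virtual content, but the principle is just this comparison.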

There is so much more to come. This is all the time I have for today, so last but not least be sure to check out the 3D reconstruction demos (another great feature for AR: reconstructing real rooms), which Rouli already documented well on Gamesalfresco. (Damn, I’m way behind after being a month offline! ;-))

Next week I’ll continue with more cool demos and future visions on this topic! :-)
