Augmenting VR so you don’t trip and fall

Many of us are experimenting with the DK2 plus cameras and Kinect sensors (think, for example, of the Tony-Stark-style holographic interaction demo presented by Thammathip Piumsomboon at the last ISMAR). We are trying to get a video-see-through view of the real world and to allow for interaction (and thus collisions) with our own hands. We are focusing on demos that augment the real world.
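
To make the “collisions with our own hands” part a bit more concrete, here is a minimal sketch of the principle: a tracked hand joint tested against a virtual object as two spheres in the same tracking space. Everything named here is a hypothetical illustration, not any particular demo’s code.

```python
import numpy as np

def hand_hits_object(hand_pos_m, obj_center_m, obj_radius_m):
    """Sphere test: does a tracked hand joint touch a virtual object?

    hand_pos_m / obj_center_m are XYZ positions in the same tracking
    space (e.g. Kinect skeleton coordinates, in metres). All names and
    values are made up for illustration.
    """
    dist = np.linalg.norm(np.asarray(hand_pos_m) - np.asarray(obj_center_m))
    return dist <= obj_radius_m

# Hypothetical per-frame check with a made-up hand joint position:
if hand_hits_object([0.10, 1.20, 0.80], [0.00, 1.20, 0.90], 0.15):
    print("hand touches the virtual object - trigger the interaction")
```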

Now, with the Oculus CV around the corner and others joining the party, everybody is refocusing attention on how to handle VR well: how to create great real-time demos, how to interact with them – and how to stay safe! Everyone has enjoyed the rollercoaster ride, almost tumbling to the floor! So, looking at the health part: what can we do to stay safe in VR while our visual sensors are completely occluded? We could simply sit down on a chair, move only within a well-defined and safe space – or somehow feed reality back into our virtual space.

This could be a translucent virtual world (blending in some percentage of real video-see-through footage), audio warning signals based on distance measurements – or the approach tried out by Dassault Systèmes and David Nahon’s iV Lab team. Let’s take a look:
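
(A quick aside on the first option: “translucent” could be as simple as this minimal sketch – blend the video-see-through feed over the rendered frame, with the blend factor driven by the distance to the nearest real obstacle. Function names and thresholds are my own assumptions, not anyone’s shipping code.)

```python
import numpy as np

def passthrough_alpha(min_distance_m, warn_at_m=1.0, solid_at_m=0.3):
    """Distance to the nearest real obstacle -> blend factor.

    0.0 = pure virtual world, 1.0 = pure camera passthrough, with a
    linear ramp in between. The thresholds are made-up defaults.
    """
    t = (warn_at_m - min_distance_m) / (warn_at_m - solid_at_m)
    return float(np.clip(t, 0.0, 1.0))

def composite(virtual_rgb, camera_rgb, alpha):
    """Blend the camera frame over the rendered frame (same size, float RGB)."""
    return (1.0 - alpha) * virtual_rgb + alpha * camera_rgb

# Hypothetical usage with dummy frames: at 0.5 m to the nearest
# obstacle, a good deal of reality is already blended in.
virtual_rgb = np.zeros((480, 640, 3))
camera_rgb = np.ones((480, 640, 3))
frame = composite(virtual_rgb, camera_rgb, passthrough_alpha(0.5))
```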

I like their trials with “auto-detecting” close objects and making them appear only temporarily. The integration of one’s own body (or other people’s) certainly gives a good idea of how future remote collaboration in a shared VR space could look! Obviously the Kinect data is too rough and currently needs more setup time and alignment, but we can see where it might take us! I’d love to see it out in the (consumer) field, maybe even to be able to run through my own virtual world outdoors* – only letting reality drop in where needed (certification for public streets might take a decade, though). :-)
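
If you want to play with that auto-detect idea yourself, its core could look something like this minimal sketch (my own guess at the principle, not the iV Lab implementation): take a depth frame registered to the camera image and let reality through wherever something gets too close.

```python
import numpy as np

def reveal_close_objects(virtual_rgb, camera_rgb, depth_m, cutoff_m=0.8):
    """Punch real pixels through the virtual world where something is close.

    depth_m: per-pixel distances (metres) registered to the camera image;
    zero means "no reading" and is ignored. cutoff_m is a made-up threshold.
    """
    mask = (depth_m > 0) & (depth_m < cutoff_m)   # nearby real geometry only
    out = virtual_rgb.copy()
    out[mask] = camera_rgb[mask]                  # reality drops in where needed
    return out
```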

Enjoy!

*) Compare LifeClipper from four years ago!