Android towards Realism and Kinecting again!
Hi Folks,
Busy days down here in Munich! You are probably preparing yourself for ARE 2011 on May 17–18, 2011 in Santa Clara (2010 video remix here), while I’m setting up my stuff for our conference in Munich on May 18–19: the RTT Excite 2011 on photo-realistic real-time rendering, VR, digital prototyping and marketing. If you are in Munich, be sure to come over and say hi in the FutureLab, which has some neat Mixed Reality demos, too! :-)
Android-based Realism
Adding realism to AR by using real-world surrounding information (HDR panoramas, IBL) is something we all want sooner rather than later! We have already seen a cool demo on this from Finland’s VTT, where they fake the surrounding information using the background video and by scanning a ping-pong ball for lighting information.
Now we have another candidate porting a similar approach to our beloved mobile devices. The paper from the University of Münster (Germany), titled “Virtual Reflections for Augmented Reality Environments”, was already published in 2004, but now it’s time for it to hit the mobile world!
If you are interested in some tech talk, please read on! :-) Their approach can create cube maps for all reflective or lighting purposes in real time, using only the information from the background video frame. They don’t just flip, mirror, and stitch the frame together six times; they do a pretty smart estimation that yields plausible, neat-looking reflections (although obviously fake): they project six regions of the background frame (see below) onto the six sides of a cube. The general idea is to pick reasonable regions to map from: parts from the left go onto the left cube face, parts from the right onto the right face, lower parts onto the bottom, upper parts onto the sky face above, and the center of the image onto the back face. That still leaves the front cube face, for which there is no easy source. They approximate the information needed by projecting the outermost parts of the image onto the front face. This looks pretty convincing, and the singularity in the middle can be neglected, especially with non-planar, more complex objects (rather a bunny than a flat mirror…).
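To make the region-to-face idea concrete, here is a minimal CPU-side sketch in C++. All region coordinates, the `Image`/`CubeMap` types and the nearest-neighbour sampling are my own illustration, not the paper’s implementation (which projects, rather than crops, the regions; the front face in particular should come from the outermost border ring, crudely stood in for here):

```cpp
#include <cstdint>
#include <vector>

struct Image {
    int w = 0, h = 0;
    std::vector<uint8_t> rgb;            // w * h * 3 bytes, row-major RGB
    Image() = default;
    Image(int w_, int h_) : w(w_), h(h_), rgb(w_ * h_ * 3, 0) {}
    const uint8_t* at(int x, int y) const { return &rgb[(y * w + x) * 3]; }
    uint8_t*       at(int x, int y)       { return &rgb[(y * w + x) * 3]; }
};

// Resample a normalised source rectangle of 'frame' (rx, ry, rw, rh in
// [0,1]) into a square face, using nearest-neighbour sampling for brevity.
static Image resampleRegion(const Image& frame, float rx, float ry,
                            float rw, float rh, int side) {
    Image face(side, side);
    for (int y = 0; y < side; ++y)
        for (int x = 0; x < side; ++x) {
            int sx = static_cast<int>((rx + rw * x / (side - 1)) * (frame.w - 1));
            int sy = static_cast<int>((ry + rh * y / (side - 1)) * (frame.h - 1));
            const uint8_t* p = frame.at(sx, sy);
            uint8_t* q = face.at(x, y);
            q[0] = p[0]; q[1] = p[1]; q[2] = p[2];
        }
    return face;
}

struct CubeMap { Image left, right, top, bottom, back, front; };

// Border regions feed the side faces, the image centre feeds the back
// face; the front face is crudely approximated by the whole frame here
// (the paper projects the outermost parts instead).
CubeMap cubeMapFromFrame(const Image& frame, int side = 256) {
    CubeMap cm;
    cm.left   = resampleRegion(frame, 0.00f, 0.25f, 0.25f, 0.50f, side);
    cm.right  = resampleRegion(frame, 0.75f, 0.25f, 0.25f, 0.50f, side);
    cm.top    = resampleRegion(frame, 0.25f, 0.00f, 0.50f, 0.25f, side);
    cm.bottom = resampleRegion(frame, 0.25f, 0.75f, 0.50f, 0.25f, side);
    cm.back   = resampleRegion(frame, 0.25f, 0.25f, 0.50f, 0.50f, side);
    cm.front  = resampleRegion(frame, 0.00f, 0.00f, 1.00f, 1.00f, side);
    return cm;
}
```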
This way they can already simulate glossy and diffuse reflections. Lighting/shadow influence will probably be the next step, since the cube map is already there…
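One simple way to get glossy and diffuse looks out of the same cube map is to pre-blur its faces. This is just a hedged stand-in for proper convolution, reusing the types from the sketch above; the paper does not prescribe this particular filter:

```cpp
// Box-blur one face; a small radius fakes a glossy surface, a large
// radius a diffuse one.
static Image boxBlur(const Image& src, int radius) {
    Image dst(src.w, src.h);
    for (int y = 0; y < src.h; ++y)
        for (int x = 0; x < src.w; ++x) {
            int r = 0, g = 0, b = 0, n = 0;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sy < 0 || sx >= src.w || sy >= src.h)
                        continue;
                    const uint8_t* p = src.at(sx, sy);
                    r += p[0]; g += p[1]; b += p[2]; ++n;
                }
            uint8_t* q = dst.at(x, y);
            q[0] = static_cast<uint8_t>(r / n);
            q[1] = static_cast<uint8_t>(g / n);
            q[2] = static_cast<uint8_t>(b / n);
        }
    return dst;
}

// Pre-filter all six faces once per frame; the renderer then samples the
// blurred cube map for glossy/diffuse shading.
CubeMap prefilter(const CubeMap& cm, int radius) {
    return { boxBlur(cm.left, radius),  boxBlur(cm.right,  radius),
             boxBlur(cm.top,  radius),  boxBlur(cm.bottom, radius),
             boxBlur(cm.back, radius),  boxBlur(cm.front,  radius) };
}
```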
There are two more nice twists to it: first, they adjust the region (image below) for the background part by looking up where the CG object will be placed, taking the “behind” pixels from that area, and distorting the adjacent regions accordingly.
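A minimal sketch of that adjustment, under my own assumptions: re-centre the back-face source rectangle on the object’s projected screen position (normalised coordinates) and clamp it to the frame; the neighbouring regions would then be stretched to fill the remaining border. The 0.5 region size is an assumed default, not a value from the paper:

```cpp
struct Rect { float x, y, w, h; };       // normalised frame coordinates

// Centre the back-face region on the object's projected position
// (objU, objV in [0,1]), clamped so the rectangle stays inside the frame.
Rect backRegionFor(float objU, float objV, float size = 0.5f) {
    float x = objU - size * 0.5f;
    float y = objV - size * 0.5f;
    if (x < 0.f) x = 0.f;
    if (x > 1.f - size) x = 1.f - size;
    if (y < 0.f) y = 0.f;
    if (y > 1.f - size) y = 1.f - size;
    return { x, y, size, size };
}

// Usage: Rect r = backRegionFor(objU, objV);
//        cm.back = resampleRegion(frame, r.x, r.y, r.w, r.h, side);
```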
Second, inter-virtual reflections are made possible by multiple render passes: given a recursion depth, multiple objects can reflect each other (including the real-world reflections!) by capturing the rendered image containing the first object and using it as the input for the cube-map reflections of the second object. Smart! :-)
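The pass structure could look roughly like this, reusing `Image`/`CubeMap` from the sketches above. `renderScene()` and `buildCubeMapFor()` are hypothetical stubs standing in for the real renderer; they only keep the sketch compilable:

```cpp
static Image renderScene(const Image& frame,
                         const std::vector<CubeMap>& /*envs*/) {
    return frame;  // stub: a real renderer draws the objects over the frame
}

static CubeMap buildCubeMapFor(int /*object*/, const Image& renderedSoFar) {
    // stub: re-derive the environment from the image rendered so far, so
    // reflections of the other objects end up in this object's cube map
    return cubeMapFromFrame(renderedSoFar, 256);
}

Image renderWithInterReflections(const Image& frame, int objects, int depth) {
    // Pass 0: every object reflects only the real-world environment.
    std::vector<CubeMap> envs(objects, cubeMapFromFrame(frame, 256));
    Image result = renderScene(frame, envs);
    // Each further pass feeds the previous result back in, so the 2nd
    // object's cube map now contains the 1st object (and vice versa),
    // up to the chosen recursion depth.
    for (int pass = 1; pass < depth; ++pass) {
        for (int i = 0; i < objects; ++i)
            envs[i] = buildCubeMapFor(i, result);
        result = renderScene(frame, envs);
    }
    return result;
}
```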
Can’t wait to see more complex reflection and light calculations using two or more cameras (back-facing, front-facing, stereo…)!
More Kinect Demos
In other news, AR Door put together another fun virtual-mirror dress-up demo, a virtual fitting room for Topshop:
It seems to me they only use the body’s center position for “rigging” the clothes to the person. It’s pretty nice already, but I hope we’ll see even better integration with the moving bones (possibly when the official Microsoft Kinect SDK arrives this summer), along with occlusion handling and hopefully even some integrated cloth simulation, too!
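If it really is centre-only rigging, a minimal sketch of the idea could look like this; the joint layout, anchor offset and reference depth are all my own guesses for illustration, not AR Door’s code:

```cpp
struct Joint   { float x, y, z; };   // screen-space x/y, depth z in metres
struct Overlay { float x, y, scale; };

// Place a 2D garment sprite at the tracked torso position and scale it
// with the user's distance: nearer user => larger garment.
Overlay rigGarment(const Joint& torso, float anchorOffsetY = 0.15f,
                   float referenceDepth = 2.0f) {
    Overlay o;
    o.scale = referenceDepth / torso.z;
    o.x = torso.x;                           // centred on the torso
    o.y = torso.y - anchorOffsetY * o.scale; // shift anchor up to shoulders
    return o;
}
```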
Juggling with the Kinect is what Tom Makin does in his cute video (code here). It just made me smile, especially while waiting for the European Juggling Convention 2011 (in August in Munich). So greetings to you, Tom!