FMX: VR Technology for Storytellers in 2016

Today, I’d like to talk about a few takeaways from FMX regarding the mixed reality continuum. Admittedly, it’s almost completely about VR – typically people carry the acronym “AR” around in their PowerPoint titles, but don’t really think about it (which is a shame). Often it has been “VR is so great blah blah, and today we can do this-and-that… oh yeah, and there will be AR too someday – somehow”. Nevertheless, VR is pretty damn cool and worthwhile for sure. So, let’s dive in, and I will point to AR where possible.

Classic Storytelling

At FMX we typically encounter ready-made art: pre-rendered stories, pre-written suspense, pre-selected angles and cuts, edited to fit a stand-alone “created-once, always-valid” single vision. The director of photography selects an angle, the director and editor find the best moments to speed up time through cuts and location jumps, and the art department and the CGI and rendering artists make it all visually appealing. They have all the time in the world for a single result (e.g. The Jungle Book took 240 million render farm hours and 1.984 TB of data). In recent years game engines caught more and more attention – for quick previs during production (sorry, I have to brag here and say: that took a while, guys! We did this in 2005 with Quake 3 for movie previs). But in 2015/16 everybody realized that VR will really arrive and is here to stay. Everybody is in their trial & error research phase and has a mobile and a tethered VR demo up their sleeves. So, what’s the state today?


Current Technology & Constraints

Regarding the current state of technology, everybody was pretty excited about 2016 – the kind of year historians might call “the year VR broke through”. All of a sudden, people genuflect to VR, seemingly apologizing for their unawareness over the last years.

It was often said that we are now in the lucky days where VR hardware and software becomes broadly available – and can be dirt-cheap! (For the German readers: when Tchibo sells VR cardboards, you know it’s mainstream! But don’t run away!) Currently we are in the wild west of the VR dawn and there are no real standards – neither SDKs nor guidelines. People might moan about it, but you can also see it as a chance for faster evolution cycles, as no standards committee is stopping innovation just for the sake of compatibility (especially in Europe, defining an industry standard can take forever!). Nevertheless, I hope we can reach some kind of open standard for VR – think of it as the successor to the internet!

So, what is possible today? We see many single-player “VR experiences” that last a few minutes and use gamepads or tracked controllers – some with room-scale movement, some without. If we talk about captured live performance, we mostly talk about mono 360° video content; if it’s full CG, we reach stereo. Some demos extend the volume you can move through via external tracking and backpack computers (to go wireless), or allow a multi-user experience with added 5-point motion tracking per user (head, hands, feet). Tracked controllers let us interact almost naturally with virtual objects.

What we don’t see are AR demos (not ready yet) or high-end renderings on untethered VR HMDs. Neither do we see remote cooperation or education tasks in VR (but to be fair: FMX is about entertainment and storytelling). Body suits for fully inside-out motion capture are available and could be hooked up to VR – but would you want to jump into the sweaty suit of the guy in line before you? Missing completely are live demos allowing gesture interaction with our own hands. It’s just not there yet, it seems (though research on it was presented, and everybody is hooking up their Leap Motions to their HMDs at home).

In any case, we do have a good toolset to start with in 2016! So, what do we do with it, and what do we need to consider? What will the future look like? I’ll dive into this in the next post!