World War Z – let the blind see with depth!
Today, on a lazy summer day, I'd like to think a little bit about two things in AR. First, the technological part of tracking and sensors, with a vendor fight coming up (couldn't resist the z-depth-buffer joke), and second, the form factor of today's and tomorrow's AR devices. Will we really wear glasses soon? What might be the showstopper? Two cents on it today.
Still buffering… depth sensing to come… some day.
When thinking about consumer AR devices, all we have are the RGB cameras in phones. There is no other consumer option for private at-home use. Hence we are still stuck with old-school vision-based tracking approaches and algorithms like extended SLAM to understand the world at least a little bit for our augmentation purposes.
metaio's — sorry, Apple's — ARKit does a pretty damn good surface estimation and lets us place objects within the screen's frame. But it remains a visual overlay with no further knowledge of the surroundings. Also, interaction is limited to touch-screen-only (if we don't consider "change of perspective by moving around" an interaction). Accurate occlusion, exact positioning and distance measurement will only come with more sensor information. Devices like the HoloLens use infrared time-of-flight sensing or projected infrared patterns to better estimate depth. Google's Tango still suffers from non-availability.
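Why does depth matter so much for occlusion? With a per-pixel depth map, the decision becomes a simple comparison: wherever the measured real-world surface is closer to the camera than the rendered virtual object, the virtual pixel should be hidden. A minimal NumPy sketch of that idea (all names and values are hypothetical, not any vendor's API):

```python
import numpy as np

def occlusion_mask(sensor_depth, virtual_depth):
    """Per-pixel z-test: True where the real world is closer than
    the virtual object, i.e. where the virtual pixel must be hidden."""
    return sensor_depth < virtual_depth

# Toy 2x2 "frame": depth-camera readings vs. a virtual object at 1.5 m.
sensor = np.array([[1.0, 2.0],
                   [0.5, 3.0]])        # real surfaces, in meters
virtual = np.full((2, 2), 1.5)         # virtual object depth, in meters

mask = occlusion_mask(sensor, virtual)
# -> [[ True False]
#     [ True False]]  (left column: real wall in front, hide the object)
```

Without a depth sensor there is no `sensor_depth` to compare against, which is exactly why an RGB-only overlay always floats in front of everything.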
No questions asked, depth sensing will allow a lot of new applications, including indoor navigation, 3D object scanning and better object recognition. So why does it take so long? Shrinking the tech seems to be a real issue. At least we went down from the first Tango devkit via the Lenovo phablet to a real (but still big) smartphone-sized device with the Zenfone AR. But will others follow? Rumors say that Samsung could be integrating more camera sensors (Tri-Cam style) into their new Note 8 and run Tango, too. Will Apple integrate depth sensors (into front and back cameras) as rumored for their fall 2017 release? I do think so. ARKit was there to get everybody hooked and started. The next ARKit SDK update could include depth access. It should! It has to! Though Google has had the technological lead here (please hurry up, awesome Johnny Lee!), Apple might leap forward and take over. Having exact depth info will enable way more AR applications. The war for dominance of THE AR platform is on. But still, one problem remains…
The unsolved interaction issue
… how do we interact with the virtual content? ARKit is fun for drag-and-drop placing of virtual furniture, but you always interact by holding up your phone like a clueless tourist in the Louvre. Real life does not work like that. You want to touch and interact with the objects around you (though maybe you shouldn't do so with the Mona Lisa). ARKit today only allows smudging your screen while moving virtual objects around on it. Regarding this issue, the first ARToolKit from 1999 and other marker-based AR solutions were already better. Computing power back then was not sufficient for anything else, but a printed marker has one advantage: you can touch and grab a physical reference object. Placing an IKEA catalogue on the floor might be an example where we can happily say those days are over. But for direct manipulation of virtual objects, concepts like the MergeVR Cube are way more intuitive and direct. This even beats hand-gesture interaction in mid-air. A Leap Motion for AR would not solve the problem of haptics and the reliability of our world interactions. With better object recognition (possibly through depth cameras and better algorithms) in the future, we can enable any real-world object to be our haptic link to the virtual objects. AR needs to get smarter and really see the world and its objects. Right now it's all still too blind.
Solving the digital tourist syndrome?
If we already had slim glasses with accurate depth sensors, we could use gestures and any physical tool to interact with augmented digital objects. But there is still some way to go. In the meantime, better-equipped phones can close the gap, and new devices like the passive-tech AR glasses from Lenovo and Disney or Mira Reality could help get our hands free again.
I'd always say this is the way AR was meant to be and to be worn. Only with very slim glasses can it be true AR. But it will take longer to reach a meaningful quality for everyday usage. The Prism glasses and others depend on a smartphone that clicks into them. Will we get a better front cam in the new iPhone that works with the Prism (then I guess I'll have to ditch my Tango), and what will we see from Lenovo and others next? These plastic cardboards for AR could go big in marketing very soon, but once you've used one, it probably ends up lying around gathering dust – and will be found by the next generation, laughing about our baby steps in AR. But, hey, you've got to go through it…
… or do we have to?
In business settings – think logistics pickers and the other scenarios advertised for Google's Glass relaunch – AR smartglasses already work. Does that mean we will all get assimilated? Will it spread like other specialized high-tech solutions (often military first) – like Teflon during the Manhattan Project, or head-mounted AR for airborne training (with Tom Furness) – before going mainstream? Or do we reach the end of the line for the mainstream before that?
AR glasses are distracting in a social context. Today, we are annoyed when someone won't take out their earbuds or take off their sunglasses during a close conversation. When we take a picture of a group of friends as a memory, it bothers us when someone leaves their shades on (at least it bothers me). When we are having a meaningful conversation, the chime sound on your phone can kill the moment. When people meet up for a beer night, they already build towers of smartphones – the one who can't resist and grabs theirs first pays the next round. Unless you are Steve Mann or the Borg version of Picard, everybody will be freaked out if you have some tech on your face to distract you. Humans are very sensitive when it comes to faces and "things that shouldn't be there". So, are AR glasses doomed?
If you use your phone with Tango, ARKit or whatever and do hand-held AR like the tourist, you win twice: it is possible today, and your friends will know when you are distracted. With glasses (unless a blinking red light is enough), they won't know… is this a generation issue or a showstopper? Time will tell, of course. The Snapchat generation will adopt it, but I guess it will take way longer to go mainstream than everybody proclaims today. Maybe 10 years, 15, 20? If its advantages and changes in society demand it, it will happen. But until then I'm also happy to enjoy my supposedly already-dead smartphone, which can rest aside unused, not blocking my view of the sea and my friends.
Cover image CC by timothykrause.