Magic Leap on Twitch: Augmented Reality as our house guest
The spell has been broken and we finally got some more details directly from the Magic Leap team today. I'd love to hack my thoughts on it directly into the blog, in case you missed the show but are interested, or if you'd like to discuss it further with me. I won't just recap. So, Magic Leap showed up on the (mostly) game-casting platform Twitch to give us a first session, with more to come on a monthly basis. Was it worth it? What did we learn? Let's dive into it.
No, there was no live demo, no glasses to be seen (but some very classy robots in the back) and no specification details or updates on the hardware. Luckily, there was also no shameless marketing, just plenty of thoughts on designing mixed reality. The show was hosted by Alan Noon (Sr. Tech Artist) and Brian Schwab, the director of their Interaction Lab, who took over most of the time and was almost unstoppable (unfortunately only almost). They kicked off with their agenda (Creator Portal launched way back, Lumin OS update today) and gave some examples of community (emulator-based) work. Nothing too crazy yet, so they quickly moved on to the bigger chapter of:
Designing for Spatial Computing
Brian dove in and shared his thoughts on designing for a completely new medium. This demands new design patterns and requires creators to rethink their designs, interactions and stories. It's not only a technical problem, but more of a social or experience issue. If George Clooney shows up as an AR avatar in your kitchen, it might just not work at first, even if you bypass the visual uncanny valley. It's just not normal for him to show up there… people need to get used to AR experiences, and designers must ease them gently into their mixed reality. That's only for starters; there were many great examples during the show. But it sets the stage.
Best Practices
When pixel counts or scene complexity were addressed in favor of simplicity, you could easily suspect (like I did) hardware limitations being sold as a feature. But diving deeper into a new medium like MR, it makes sense not to overload users and to leave them some space to stay connected to the real world and adapt to the new virtual items. One still has to interact with the real world (not bumping into walls), and sometimes less can be more, as long as it is believable. Designers need to leave space for the user to breathe and operate. Adding to this is spatial audio, which supports tricking our brain. It also helps to find items that are out of (limited-FOV) sight. Interestingly, Brian mentioned deliberately shifting the sounds of out-of-sight objects to come more from behind the user, as lateral sounds are harder for humans to locate than sounds coming straight from the back. When he took on physicality, physics, gravity and occlusion, I thought we might end up with the HoloLens tutorials and the like (make objects more believable by having them react like normal-world objects), but he went on. Manipulation needs to be natural: rather grabbing objects directly than aiming and "telekinesising" them. Maybe another hint at more natural interaction to come? But their SDK says differently… we have a limited number of gestures, not comparable to Project North Star including the Leap Motion full hand tracking tech…
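To make that rear-bias idea concrete, here is a minimal sketch of how such a rule might look; the function name, the comfort angle and the bias factor are my own assumptions, not anything Magic Leap showed:

```python
import math

def biased_audio_direction(yaw_to_object_deg, fov_half_angle_deg=25.0, rear_bias=0.5):
    """Shift the apparent direction of an out-of-view object's sound toward the
    rear of the listener, where localization is easier than at the sides.

    yaw_to_object_deg: horizontal angle from the user's gaze to the object
                       (0 = straight ahead, +/-180 = straight behind).
    rear_bias:         0 keeps the true direction, 1 snaps it fully to the back.
    """
    if abs(yaw_to_object_deg) <= fov_half_angle_deg:
        # Object is visible: play its sound where it really is.
        return yaw_to_object_deg

    # Object is out of view: push the sound direction toward +/-180 degrees.
    target = math.copysign(180.0, yaw_to_object_deg)
    return yaw_to_object_deg + rear_bias * (target - yaw_to_object_deg)

# Example: an object 60 degrees to the right is rendered at 120 degrees,
# i.e. noticeably behind the listener's shoulder.
print(biased_audio_direction(60.0))   # -> 120.0
```

The point of the bias is simply to move the cue out of the ambiguous lateral zone, so the user turns around instead of hunting left and right.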
What I liked a lot was his take on multiuser. Not only because it's fun to share and possibly work together in MR as a tool, but also because it makes the experience easier to believe. When you see a (very convincing) augmented object in your room and forget for a moment that you are wearing glasses, you would say: "holy crap, I'm crazy!" But with others in the room you get direct validation. Crucial point!
These kinds of interaction practices will be shared as code snippets (like the HoloLens HoloToolkit) via "MagicKit" packages soon. No date given. But it feels like they are preparing some big piles of code to dig through soon. Maybe it's just well sold, but it sounds very promising. He went on with a design concepts chapter entitled:
Spatial Computing Challenges
Here people pricked up their ears when the field of view became the topic. But it didn't go in the direction the great Karl Guttag would have loved, with tech sheets being handed out. It was more about designing with the given (unnamed) limitations. High-frequency textures or complex objects that get harshly cut off by the frame can be the worst. But rather than fading things out or not cutting things, he … kept it quite secret. It seems the content either floats around following you (never getting cut off) or is kept small enough to fit in. A detailed insight was not given. Too bad. Talking about frames: another issue comes with the so-called Screen Mode: users who are presented with a virtual screen will typically stop moving (like people in the real world staring at their phones when something supposedly important pops up): MR paralysis. Love it. But it needs to be overcome by better design. Best to avoid big virtual screens altogether? Rather use info snippets that follow us and only show up when needed? Less is more. When talking about cognitive load for the user, I loved his example of a good house guest: augmented George Clooney intrudes into your personal space and should behave like a polite guest, reacting to your current actions. Well, tough one with current concepts of games, tools and the still striking uncanny valley! Avatars are still way too dumb. AI needs to get better and get more data to work with:
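For the "content floats around following you" guess, a minimal sketch of a lazy-follow behaviour could look like this; the comfort cone and smoothing factor are my own assumptions, not anything Magic Leap confirmed:

```python
def lazy_follow(content_yaw_deg, gaze_yaw_deg, comfort_half_angle_deg=15.0, smoothing=0.1):
    """Keep a floating UI element inside a comfortable cone around the user's gaze,
    so it never gets harshly cut off by the display frame.

    The element stays put while it is within the comfort cone; once the user's
    gaze drifts too far, it eases toward the gaze direction instead of snapping.
    """
    # Shortest signed angle from the content to the gaze direction.
    offset = (gaze_yaw_deg - content_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(offset) <= comfort_half_angle_deg:
        return content_yaw_deg  # still comfortably visible: don't move
    # Ease the content toward the gaze direction (called once per frame).
    return content_yaw_deg + smoothing * offset

# Example: the user has turned 40 degrees away; the panel drifts after them
# over a few frames and settles near the edge of the comfort cone.
yaw = 0.0
for _ in range(30):
    yaw = lazy_follow(yaw, 40.0)
print(round(yaw, 1))
```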
he talked about fallback inputs and how overly complex input modes like a gamepad controller distract too much. In MR we can make use of many more input variables coming from the human body, which can make everything more natural, but the data needs to be interpreted by the system. It seems Magic Leap is working on their share here and will soon give us tools to make better use of this data. For example: let the George Clooney character shut up when we look away, or repeat his phrase when we raise an eyebrow. Will the ML1 sensors and software allow all this in their first iteration, AI included? Obviously scary, just like the full eye tracking coming from upcoming Facebook devices and others, but also understandable from a technical point of view: if the system interprets human actions better, it can react in a more natural way. Today the uncanny valley is all about believable visuals of virtual humans and their body movements (remember the first Final Fantasy movies? Brrrr.). But in the next generation this uncanny valley will also be about believable reactions, says Brian. +1 from me!
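Purely to illustrate that kind of gaze-driven reaction (the class, signals and lines here are my own invented example, not part of any announced Magic Leap API), a tiny sketch could look like this:

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    looking_at_character: bool
    eyebrow_raised: bool

class PoliteAvatar:
    """Toy state machine for a character that reacts to where the user looks:
    it pauses its line when the user looks away and repeats it when the user
    signals confusion with a raised eyebrow."""

    def __init__(self, line: str):
        self.line = line
        self.state = "talking"

    def update(self, gaze: GazeSample) -> str:
        if not gaze.looking_at_character:
            self.state = "paused"
            return "(pauses politely)"
        if gaze.eyebrow_raised:
            self.state = "repeating"
            return f"Let me say that again: {self.line}"
        self.state = "talking"
        return self.line

avatar = PoliteAvatar("Nice kitchen you have here.")
print(avatar.update(GazeSample(True, False)))   # speaks the line
print(avatar.update(GazeSample(False, False)))  # user looks away -> pauses
print(avatar.update(GazeSample(True, True)))    # eyebrow raised -> repeats
```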
To better support this, they seem to be working not only on this kind of sensor fusion of our behavioural signals but also on advanced stuff like environmental understanding. Developers will be given tools to better understand real-world objects (again like the HoloToolkit or ARCore/ARKit letting you learn about suitable surfaces to sit on, walls, etc.). But they also want to climb higher up the ladder: learn about the surface, that it's a table, that it is my table, a wooden table… I'm wondering how deep this rabbit hole will go in their SDK… and how European privacy rules will go along with it. Technically promising, though it still seems a bit far off (it was in the advanced, experimental chapter of his talk). The same applies to his input on story design. Obviously, classic computer game storytelling does not work anymore. Games like "Fragments" on the HoloLens give an idea of how dynamic things need to be (example given by me, not him). Developers need to rethink solutions and ways to get the player to the next chapter. I hope they can provide us with the right tools to help us out. Will there be a whole MagicKit on AI and virtual character behaviour? I hope so!
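To illustrate what climbing that semantic ladder could look like in data terms (a sketch of my own, with made-up names, not Magic Leap's SDK):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SceneSurface:
    """A detected real-world surface with progressively richer semantic labels,
    from raw geometry up to user-specific meaning."""
    plane_id: str
    geometry: str                            # e.g. "horizontal plane, 1.2 x 0.8 m"
    labels: List[str] = field(default_factory=list)

    def refine(self, label: str) -> None:
        """Add a more specific semantic label as understanding improves."""
        self.labels.append(label)

    def most_specific(self) -> str:
        return self.labels[-1] if self.labels else "unknown surface"

surface = SceneSurface("plane_017", "horizontal plane, 1.2 x 0.8 m")
surface.refine("table")          # generic object class
surface.refine("wooden table")   # material property
surface.refine("my desk")        # user-specific meaning (privacy-relevant!)
print(surface.most_specific())   # -> "my desk"
```

The last step is exactly where the privacy question starts: the device no longer just knows there is a table, it knows whose table it is.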
Deep technical insights missing?
For me, this was a good first session. I don't care about the specs too much right now. HW updates will come, but the paradigm shift and the new medium were covered well today. Brian did a very good job giving insights on how to design for an augmented reality device. Time was short, but I hope he will show up again soon to continue. He probably even wanted to go on discussing some tech specs and limitations (like additive display issues not allowing shadows, etc.) but was stopped by Alan. Maybe the PR department had a word via his in-ear speaker. Oh well, nevertheless, I'd say a great kick-off to learn more. If you are an experienced AR goggles developer, there were no real surprises; for now it could also have been a presentation by Microsoft, Meta and the like. But my spidey sense is tingling that this could go very well and help creators stay in close contact with the team. I'm sure they will dive deeper into ML specifics next time. I only wish they'd release some devices into the wild sooner… The emulator is so ugly and the community examples are not convincing yet either. Oh, be sure to tag your own experiments with #MadeForMagicLeap so that they might include them in the next run of the show!
Keep it coming, Magic Leap! … and maybe allow more Q&A next time, in an organized way with voted questions. Also, let us know more about the hardware and the planned software infrastructure around it. Well, I'll tune in again for sure.
PS: If you haven't done so yet, check out my detailed dissection of the Lumin SDK from Magic Leap on www.vrodo.de (using your German or the next best AI to translate).
[Update, May 3rd]
The first show is now available to watch on YouTube. Upcoming shows will also be shared after broadcast. Enjoy!