First ISMAR demo impressions
Today I want to share my first demo impressions from ISMAR 2014 in Munich. Unfortunately I could not record or see every single demo, so I may well have missed some great pieces. But let me explicitly mention a few demos/papers that are also visible in the video I will upload very soon!
Magic-lens Paradigm for Dual-View
The first demo I liked a lot was “A Magic-lens Paradigm Designed to Solve the Dual-view Problem” by Klen Čopič Pucihar and Paul Coulton. Using AR via a smartphone/tablet, it might be easy to change perspective and examine augmented objects, but actually working with them (e.g. making annotations, selecting parts, etc.) can get tricky on a handheld device. I like the small but great idea of switching the presentation on the fly to make interaction easier. I consider it a great demo to start thinking again about interaction metaphors for an AR world where mid-air gestures or shaky hand-held pinch actions might not work, or are just impractical, way slower, or less accurate!
Reconstruction demos
Several demos were shown (typically using RGBD cameras) on reconstructing the real space. Intel showed their Pirate game demonstrator, which scans the real world (cleverly integrated into the gameplay as a task for the user) and then floods the real space with water (using the created geometry). Intel will be releasing their tiny RGBD cameras in Q1 2015, to be integrated into tablets or phones. Qualcomm showed multiple demos from their R&D department. I liked the quality (7 mm accuracy) and the speed of their reconstruction algorithms, which also come with a neat twist: the system can recognize when some geometry describes a planar surface (a table, the floor, a wall, …) and will then use a simple polygonal description for that part, avoiding millions of vertices. This not only speeds up the overall software but also gives further knowledge to the application: interaction will probably mostly happen with objects (that are not planar surfaces), and knowing about the fixed walls in your space enables smarter logic as well.
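To give a rough idea of how such a plane-aware simplification could work (my own minimal sketch in Python/NumPy, not Qualcomm's actual pipeline; the function names, thresholds, and toy data are all made up for illustration), here is a basic RANSAC plane fit that lets thousands of floor vertices collapse into a single plane equation while non-planar clutter keeps its geometry:

```python
import numpy as np

def ransac_plane(points, iterations=200, threshold=0.007, rng=None):
    """Find the dominant plane in a point cloud via RANSAC.

    threshold is in the same unit as the points (7 mm here).
    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    """
    rng = rng or np.random.default_rng(0)
    best_mask, best_plane = None, None
    for _ in range(iterations):
        # Pick 3 random points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        n /= norm
        d = -n.dot(p0)
        dist = np.abs(points @ n + d)  # point-to-plane distances
        mask = dist < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    return best_plane[0], best_plane[1], best_mask

# Toy cloud: 10,000 points on a "floor" plus some clutter objects above it.
rng = np.random.default_rng(42)
floor = np.column_stack([rng.uniform(0, 2, 10000),
                         rng.uniform(0, 2, 10000),
                         rng.normal(0.0, 0.002, 10000)])  # z ~ 0, 2 mm noise
clutter = rng.uniform(0, 2, (500, 3)) + [0, 0, 0.1]       # well above the floor
cloud = np.vstack([floor, clutter])

n, d, inliers = ransac_plane(cloud)
# The floor collapses to one plane equation; only the clutter
# (the stuff you actually interact with) keeps its vertices.
print(f"plane inliers: {inliers.sum()} of {len(cloud)} points")
```

In a real reconstruction system this would of course run on meshes rather than raw points and fit many planes, but the payoff is the same as described above: planar regions become cheap parametric descriptions, and the remaining geometry is a good candidate set for interactive objects.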
The dense planar SLAM demo (using an Oculus with attached RGBD-cameras) reminded me heavily of Keiichi Matsuda’s kitchen demo from the Domestic Robocop. Keiichi, we are getting closer! (Well, hopefully not for ads.) Neat!
Diminished Reality Demo
There have been some great videos in the past from VTT in Finland on AR and especially on Diminished Reality. Sanni Siltanen presented her work on DR and MR: automatically scrubbing furniture out of your view in real-time, so that new furniture pieces can be integrated easily (see picture at the top). The demo showed the great potential, including good automatic filling of the diminished areas with convincing lighting everywhere. But some interaction with the user is still needed (defining a volume to clip out, defining the edge of the wall). This could still be optimized, although some interaction is obviously always needed (at minimum: clicking on a single object to select it). I really liked the second part, too: treating MR as Mediated Reality (the idea of altering the perception of reality, going back to Stratton's experiments and later established as a term by Steve Mann). Here it means manipulating existing objects in real-time, e.g. seamlessly changing the color of the real couch!
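To illustrate the "filling up" part in the most stripped-down way I can think of (a naive sketch of my own, nothing like VTT's actual real-time method, which also handles texture synthesis and lighting estimation), here is a toy diffusion-based inpainting in Python/NumPy that removes an object by letting the surrounding background colour flow into the masked region:

```python
import numpy as np

def diffuse_inpaint(image, mask, iterations=500):
    """Very naive 'diminishing': fill masked pixels by repeatedly
    averaging their 4-neighbours until colour diffuses in from the
    surrounding background. Only shows the basic idea."""
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()        # crude initial guess
    for _ in range(iterations):
        # 4-neighbour average via shifted copies of the image
        up    = np.roll(img, -1, axis=0)
        down  = np.roll(img,  1, axis=0)
        left  = np.roll(img, -1, axis=1)
        right = np.roll(img,  1, axis=1)
        avg = (up + down + left + right) / 4.0
        img[mask] = avg[mask]            # only masked pixels change
    return img

# Toy greyscale "wall" (value 200) with a dark "couch" (value 30) to scrub out.
scene = np.full((40, 40), 200.0)
scene[15:25, 15:25] = 30.0               # the object to diminish
mask = np.zeros_like(scene, dtype=bool)
mask[14:26, 14:26] = True                # user-selected removal region

result = diffuse_inpaint(scene, mask)
print(round(result[20, 20]))             # prints 200 – wall colour diffused in
```

The user-supplied mask here mirrors exactly the interaction the demo still requires (selecting what to clip out); the hard parts the sketch skips over are making the fill texture-aware and lighting-consistent, which is where the demo impressed.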
Mixed Reality tabletop studies
To also show a different area than just tablet and Oculus AR, I included the cool demonstrator from some nice people at the University of Würzburg (see their project page for more info and better footage). They use a Surface table as the base for a really mixed reality experience, creating a multi-modal interaction tabletop game (in cooperation with Quest’s distributor). I liked this approach of physical objects, virtually enhanced but easily visible for all. They want to further extend this mix with hidden information for each player (using their smartphones), while keeping a joint experience at the center (the objects on the table). I hope they extend the demo further and make more use of the possibilities, for example by showing a real terrain on the table, or by using depth cameras to further extend the mixing of realities (scanning physical landscape objects, or allowing gesture interaction everywhere, not only at the Leap). Great stuff, but I think I’d prefer physical dice to be thrown – enhanced by a BOOM! when the needed number of pips comes up.
I will now finish up the video that also includes the demos mentioned today – with my two favorite demos coming up next! Stay tuned!