inside AR to understand reality
The insideAR conference finished last week, so it's time to put up a quick overview for those who couldn't go. I'm also waiting for metaio's best-of video* (I hope they compile one again), but until then: a shaky video below. Things kicked off with an impressive keynote that summarized the last year at metaio pretty completely, including live demos for each and every topic on stage! I will therefore present parts of the keynote in the video.
metaio's business is growing, and they started off with the announcement of their new office in New York, reporting a five-fold increase in the number of AR apps over the last 12 months, with 110,000 developers on their SDK today. The newly presented metaio6 SDK ships with massively improved tracking and is RGB-Depth ready; the continuous visual search (CVS) got an update, as did the Creator (version 6, with an improved content pipeline), the Cloud and junaio 6.
What were the highlights? metaio showed their tech updates in three areas: productivity, location-based services (LBS) and seamless shopping. The first included the edge-based tracking (first live demo in the video) that continuously tracks edges and/or other features – whatever works best; judge for yourself below. The Creator got the corresponding update, so objects for this tracking can be set up easily in metaio6. The second area, LBS, presented a solution for indoor navigation where GPS signals fail. The SLAM demo can be seen next in the video; metaio stated 30% improved accuracy and 40% improved robustness. Their 100 m circular run ended up with only 3 meters of error, which is pretty impressive. With SLAM running on the AREngine hardware chip, it is said to use 100 times less power, enabling much longer AR usage on the go!
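For context, those SLAM numbers translate into a back-of-the-envelope loop-closure drift figure (my own quick calculation, not metaio's methodology):

```python
# Relative drift of the SLAM loop closure from the quoted numbers:
# a 100 m circular run that ends up 3 m away from its start point.
path_length_m = 100.0   # length of the circular run
closure_error_m = 3.0   # reported end-point error

drift_percent = closure_error_m / path_length_m * 100.0
print(f"loop-closure drift: {drift_percent:.1f}% of the trajectory")  # 3.0%
```

Three percent drift over an indoor loop on a mobile device is what makes the "indoor navigation without GPS" pitch plausible.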
A new addition for fast content creation is the POI Creator plugin for Microsoft Excel, and a new plugin for the continuous visual search now allows content creation from within Adobe InDesign. For the area of seamless shopping, metaio presented updates to CVS, which can currently handle image databases of around one million objects; recognition statistics showed almost no false positives while achieving almost 100% correct recognition (using the Stanford reference data).
Further tech demos showed the updated face reconstruction and tracking, and a machine-learning prototype to recognize facial expressions – an important step towards new applications. In general, teaching computers to see in order to improve AR was the key message: the RGB-Depth camera demo with the IKEA furniture and a correctly occluded virtual armchair showed the quality possible with additional sensor information. Finally, occlusion data is finding its way into official SDKs!
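The occlusion idea itself is simple to sketch: with a depth image from an RGB-Depth camera, a virtual pixel is kept only where the rendered object is closer than the real scene. A minimal per-pixel sketch with NumPy (my own illustration with made-up arrays, not metaio's SDK):

```python
import numpy as np

def composite_with_occlusion(camera_rgb, camera_depth, virtual_rgb, virtual_depth):
    """Overlay a rendered virtual object onto the camera image,
    hiding it wherever the real scene is closer to the camera."""
    # A virtual pixel is shown where the object was rendered at all
    # (finite depth) AND lies in front of the measured scene depth.
    visible = np.isfinite(virtual_depth) & (virtual_depth < camera_depth)
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]
    return out

# Tiny 2x2 example: the virtual object sits at depth 1.0 everywhere;
# it is occluded where the real scene is closer (0.5) and visible
# where the real scene is farther away (2.0).
cam_rgb = np.zeros((2, 2, 3), dtype=np.uint8)            # black camera image
cam_d = np.array([[0.5, 2.0], [2.0, 0.5]])               # measured scene depth
virt_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)       # white virtual object
virt_d = np.full((2, 2), 1.0)                            # rendered object depth
result = composite_with_occlusion(cam_rgb, cam_d, virt_rgb, virt_d)
```

This per-pixel depth test is exactly why the armchair demo needs the extra depth sensor: a plain RGB camera gives you no `camera_depth` to compare against.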
Daniel then entered the stage to show ongoing research on letting devices really understand reality. His ISMAR presentation on coherent illumination for AR was extended nicely. Initially he recorded the user's face to estimate the current lighting situation by comparing against a big reference database; he could then overlay a virtual helmet or other objects on top of the user's face. But now he showed a nice twist that felt like well-made cheating: this facial light estimation can be used to create correct cast shadows for the furniture on the floor (you just need to flip the light directions)! Take a look! A sneaky way of getting much better visual consistency – after all, there will always be a human face looking at the tablet or phone! A smart way to get more information about the world for better AR. Combined with occlusion data, we are getting closer to augmenting the world convincingly! :-)
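The flipped-light trick can be sketched geometrically: once a dominant light direction has been estimated from the user's face, mirror it and project the virtual object's points onto the floor plane to place a hard cast shadow. A toy sketch assuming a single directional light (my own illustration of the geometry, not the actual implementation):

```python
import numpy as np

def flip_light(light_dir_towards_face):
    """The trick as I understand it: a light direction estimated from
    the user's face (pointing towards the device) is mirrored so it
    lights the virtual scene in front of the camera instead."""
    return -np.asarray(light_dir_towards_face, dtype=float)

def project_shadow_on_floor(point, light_dir, floor_y=0.0):
    """Project a 3D point onto the horizontal floor plane y = floor_y
    along a directional light -> position of its hard cast shadow."""
    p = np.asarray(point, dtype=float)
    d = np.asarray(light_dir, dtype=float)
    t = (floor_y - p[1]) / d[1]   # steps along the light ray to reach the floor
    return p + t * d

# Light estimated as arriving at the face from the upper right;
# flipped, it shines down-right onto the virtual furniture.
scene_light = flip_light([-1.0, 1.0, 0.0])
shadow = project_shadow_on_floor([0.0, 2.0, 0.0], scene_light)  # -> [2, 0, 0]
```

Projecting every vertex this way gives the classic planar-shadow effect, which is cheap enough for mobile AR and already a big step up from shadowless overlays.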
PS: The full keynote is now also online on their channel.
*) metaio just put their best-of video online, check it out here to learn more about other demos and the show: