insideAR Wrap-Up (Demos Part 3)
Let’s have a final look at insideAR 2013 and its demos today! So, what remains to be mentioned? What is most important from a technological or visionary point of view that hasn’t been listed so far? Let’s see:
The edge-based tracking I already mentioned before insideAR was now shown live and hands-on. Let me requote the video:
The neat thing about this approach was mentioned before. After seeing it live, I can confirm that it really is cool and works as presented above. It certainly gives you more freedom in your workflow and makes a setup easier. For the current demos, however, you have to decide on an initial perspective to start from. The edge version of the geometry can be generated automatically (with some user interaction), but you have to pick an angle. With this preparation, the initialization works flawlessly, and it snapped pretty quickly to the correct camera position and direction.
Once initialized, the familiar feature-based algorithm kicks in and learns on the fly. No more marker setups or reference images to prepare beforehand. Neat!
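To make the two-phase idea concrete, here is a minimal sketch of such a pipeline: edge-based initialization from one prepared viewing angle, then feature-based tracking that accumulates new reference features at runtime. All names, thresholds, and the frame format are illustrative assumptions on my part, not metaio’s actual API.

```python
# Hypothetical sketch of the two-phase tracking pipeline described above.
# Phase 1: snap to the model once the camera edges align with the
# pre-generated edge model from the chosen initial perspective.
# Phase 2: switch to feature-based tracking and learn features on the fly.

class TwoPhaseTracker:
    def __init__(self, init_threshold=0.8):
        self.initialized = False
        self.threshold = init_threshold
        self.learned_features = []  # reference features gathered at runtime

    def match_edges(self, frame):
        # Placeholder for edge matching; a real system would render the
        # CAD edge model and score its alignment with the camera image.
        return frame.get("edge_score", 0.0)

    def process(self, frame):
        if not self.initialized:
            if self.match_edges(frame) >= self.threshold:
                self.initialized = True
                self.learned_features.extend(frame.get("features", []))
                return "initialized"
            return "searching"
        # Once initialized, keep tracking and learning new features.
        self.learned_features.extend(frame.get("features", []))
        return "tracking"
```

The point of the state switch is that the expensive edge alignment only runs until the first good pose is found; afterwards the cheaper feature tracker takes over.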
Mercedes-Benz Augmented Driving
The demo that had been floating around the web before could now be seen live – if you took the time to queue and didn’t mind the rain:
The demo itself did not show a HUD-style overlay on the windshield, as some mock-ups might have led you to believe, but was limited entirely to screens within the car: one version for the driver and co-driver showed mostly navigation-relevant information and had limited interaction capabilities (for safety reasons). The back seats had other visual presentations of the system, and a touch screen let passengers select points of interest and get further information (via a mocked-up connection: photos, restaurants, etc.).
The system itself relied only on the car’s internal sensors (accelerometer, compass) and GPS to determine its position. No image processing was performed; the underlying camera image served purely as a backdrop for the AR-style presentation.
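Without image processing, placing an overlay reduces to geometry: compare the bearing from the car to a point of interest with the compass heading and map the angular offset to a screen position. The following sketch illustrates that idea under assumed values (field of view, screen width); it is my illustration, not Mercedes’ implementation.

```python
import math

# Illustrative only: positioning a POI label over the camera image using
# nothing but GPS position and compass heading -- no computer vision.

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, fov_deg=60, width_px=800):
    """Horizontal pixel position of a POI, or None if outside the view."""
    offset = (poi_bearing - heading + 180) % 360 - 180  # signed angle
    if abs(offset) > fov_deg / 2:
        return None
    return int(width_px / 2 + offset / (fov_deg / 2) * (width_px / 2))
```

The accuracy of such an overlay is bounded by GPS and compass noise, which is exactly why it cannot be pixel-accurate the way a vision-based system could be.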
I liked the demo a lot, since it shows how close this is to a product release. They clearly put a lot of effort into the screen design, usability, and presentation.
On the other hand, I was expecting a bit more from the technology perspective: I had dreamt of high-speed cameras doing real-time image analysis to identify street signs and other information, and to guarantee a pixel-accurate 3D overlay… But let’s be realistic and get there step by step. :-)
Also well received lately was the “MARTA” system, developed by Volkswagen together with metaio. On Friday, a detailed presentation gave some more insight into, and a feeling for, how it operates. It’s cool to see that we are finally getting closer to a public release of these kinds of systems! Let me quote their info:
The working methods and sequences of work steps used by employees of Volkswagen Service in their everyday work are highly dependent upon a vehicle’s equipment and features. To make it easier to manage this growing complexity, employees must be efficiently supported in their work activities. This requires advanced development of the classic repair instructions which show the employee how to perform the tasks of the specific job, step by step, with relevant supplemental information such as the tools to be used, assembly configurations and test specifications.
To achieve these goals, Volkswagen developed a new display system for service information, especially for the XL1, which also provides the information on tablets and shows the service employee the next work steps directly. What is known as the MARTA (Mobile Augmented Reality Technical Assistance) system, which was developed together with the company Metaio GmbH, shows real and virtual parts in three-dimensional relation to one another.
Other demos that should be mentioned:
In the end…
A big thanks to metaio! They once again pulled off a great show with cool demos and AR news. They successfully scale up the Augmented City every year!
If you haven’t read enough about it, you should definitely check out the blog posts from others on it, too. E.g.:
Hope you enjoyed the show and the posts!
See you next time!