Sound and other Senses
I always appreciate AR work that tries to embrace more senses than just vision. Augmented reality tends to focus only on good-looking visuals. In 2010 people even started to forget that AR can and should be more than a visual overlay on a marker or a video background for some floating CG elements. Obviously the visual part has the biggest impact: almost anyone can immediately understand it and say "wow" (excluding the blind, unfortunately). Sound especially seems to be underrepresented (as I have stated various times before, e.g. at the Besser Online conference 2010). This blog post quickly presents a few ideas for augmenting the augmentations with other senses' input or output.
Sound Output
First, there are already ways of augmenting via sound. Museum audio guides with a keypad (for entering an art object's code) are an early form of sound-based AR. Companies like DNP pushed this further with augmented museum guides, and we've seen those combined with AR binocular metaphors. The following demo, however, combines other concepts: the visual part is projected by a moving projector (following the target), and directed sound is added so that it seems to come from the object's position. The sound feels like it is generated at the correct source, right where the augmentation is visible. (Unfortunately, it's impossible to judge the accuracy via YouTube.) The system was developed by Alvaro Cassinelli and Alexis Zerroug at the University of Tokyo, Ishikawa Komuro Lab:
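Since the lab's implementation isn't public, here is a minimal sketch (in Python, with invented names and parameters) of the core idea behind directed sound: derive a stereo pan and a distance gain from the tracked object's position, so the audio seems to come from where the augmentation sits. A real setup would use speaker arrays or HRTFs, but the geometry is the same.

```python
import math

# Hypothetical sketch: make a sound appear to come from a tracked object
# by computing a stereo pan and a distance-based gain. This is NOT the
# Ishikawa Komuro Lab system, just the basic geometry behind directed audio.

REF_DISTANCE = 1.0  # distance (meters) at which gain is 1.0 (assumption)

def pan_and_gain(listener_pos, listener_forward, object_pos):
    """Return (left_gain, right_gain) for a sound source at object_pos.

    listener_pos, object_pos: (x, z) positions on the ground plane, meters.
    listener_forward: unit vector the listener is facing.
    """
    dx = object_pos[0] - listener_pos[0]
    dz = object_pos[1] - listener_pos[1]
    distance = max(math.hypot(dx, dz), 0.1)        # avoid division by zero

    # Signed angle between the facing direction and the source direction.
    azimuth = math.atan2(dx, dz) - math.atan2(listener_forward[0],
                                              listener_forward[1])

    # Constant-power panning keeps loudness stable while the object moves.
    pan = max(-1.0, min(1.0, azimuth / (math.pi / 2)))  # -1 = left, 1 = right
    left = math.cos((pan + 1.0) * math.pi / 4.0)
    right = math.sin((pan + 1.0) * math.pi / 4.0)

    gain = min(1.0, REF_DISTANCE / distance)       # simple 1/distance falloff
    return left * gain, right * gain

# Example: object one meter ahead and half a meter to the right.
print(pan_and_gain((0.0, 0.0), (0.0, 1.0), (0.5, 1.0)))
```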
Sound Input
James Alliban, self-described creative technologist, creates augmentations via sound input. This has been my favorite video this week, since these days we all totally focus on body movements (with a hacked Kinect) and, again, forget about the other senses we could easily integrate (yes, smell really did fail already during the first Smell-O-Vision trials). He creates 3D pop-out sculptures by whistling and reacting to the generated sound signals. He envisions ubiquitous AR once we have HMDs. We've seen concepts for AR graffiti before (like tagdis or as in here), so it would indeed be cool to use AR to automatically visualize city flows, ubiquitous graphics, and people's creative audio output!
James Alliban talks about it more on his blog and lets you download his app here.
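If you're wondering how the sound-to-sculpture part could work, here's a rough sketch, assuming numpy and a mono microphone buffer: estimate the whistle's pitch from an FFT peak and map pitch and loudness to sculpture parameters. Alliban's actual app surely differs; this only illustrates the principle of driving geometry from sound input.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz, assumed microphone sample rate

def analyze_whistle(samples):
    """Estimate pitch (Hz) and loudness (RMS) from one mono audio buffer.

    samples: 1-D numpy float array in [-1, 1], e.g. 2048 samples from a mic.
    Returns (pitch_hz, loudness), with pitch_hz = None if no clear tone.
    """
    loudness = float(np.sqrt(np.mean(samples ** 2)))
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)

    peak = int(np.argmax(spectrum))
    # A whistle is a near-pure tone; require the peak to clearly dominate.
    if spectrum[peak] < 5 * np.mean(spectrum):
        return None, loudness
    return float(freqs[peak]), loudness

def whistle_to_extrusion(pitch_hz, loudness):
    """Map pitch to height and loudness to radius of one sculpture segment."""
    height = (pitch_hz - 500.0) / 2000.0      # ~500-2500 Hz -> 0..1 (assumed)
    radius = min(1.0, loudness * 10.0)
    return max(0.0, min(1.0, height)), radius

# Example with a synthetic 1 kHz whistle instead of a live microphone.
t = np.arange(2048) / SAMPLE_RATE
pitch, loud = analyze_whistle(0.3 * np.sin(2 * np.pi * 1000 * t))
print(pitch, whistle_to_extrusion(pitch, loud))
```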
Tactile Input
Well, this is kind of a gray area of real tactile feedback. We have smartphones that vibrate when you touch buttons, and we have force-feedback joysticks, but we cannot yet really touch and feel projected buttons or screen areas. Scientists from Oulu, Finland, have continued the MIT story of projecting input information and using your own body for tactile feedback. Here's what they say:
The Department of Data Processing at the University of Oulu in Finland presents an AR project called Paula, a smart AR interface that uses your hand gestures. In the video below you will see a user making a phone call using mobile augmented reality. The user needs only a hand recognized by the system and the see-through AR glasses. This is a very interesting prototype! The team behind this AR application is looking for investors to help them turn this prototype into a real product. Watch the video demonstration on AR newsroom.
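We don't know the Oulu team's internals, but the interaction loop presumably boils down to something like the following sketch: track the fingertip in camera coordinates and hit-test it against the projected button regions. All names and the keypad layout here are invented for illustration.

```python
# Hypothetical sketch of the interaction loop behind a palm-projected
# keypad: hit-test a tracked fingertip against projected button regions.
# The Oulu system's actual pipeline (hand segmentation, glasses rendering)
# is not public; this only shows the hit-testing idea.

from dataclasses import dataclass

@dataclass
class Button:
    label: str
    x: int      # top-left corner in projector/camera pixels
    y: int
    w: int = 60
    h: int = 60

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

# A 3x3 keypad laid out over the palm region of the camera image.
KEYPAD = [Button(str(n), 100 + (n - 1) % 3 * 70, 100 + (n - 1) // 3 * 70)
          for n in range(1, 10)]

def handle_touch(fingertip, dialed):
    """Append the digit under the fingertip. A real system would also
    require a dwell time or depth contact to count as a press."""
    for button in KEYPAD:
        if button.contains(*fingertip):
            dialed.append(button.label)
            return button.label
    return None

dialed = []
for tip in [(120, 110), (190, 110), (260, 250)]:  # fake tracked fingertips
    handle_touch(tip, dialed)
print("dialed:", "".join(dialed))
```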
Non-visual augmented reality
Speaking of the blind, who have no chance to see the colored polygons we throw around: there is great work from students at the German University of Konstanz. Their system is called NAVI: Navigational Aids for the Visually Impaired.
NAVI works something like this: the infrared camera of a Kinect is mounted on a helmet worn by a blind person. The visual data from that camera is turned into a set of audio instructions that are transmitted to the wearer via a wireless headset. The system also features a standard camera, allowing for a kind of three-camera stereoscopic vision. Certain items, such as a door, trigger events, such as a countdown, to prevent users from walking into the aforementioned door, in a kind of augmented reality.
The goal is to give a blind person warnings about potential obstructions, and directions for navigating set spaces, at a longer distance than the systems currently in place. That said, this Kinect setup will be much more of a supplement to seeing-eye dogs than a replacement for them.
The system is also paired with a vibro-tactile Arduino system in a belt, which most likely acts as a warning system should an obstacle come perilously close to the user. No plans for a commercial release of the system have been mentioned at this time.
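The students haven't published code, but the core mapping is easy to imagine: reduce each depth frame to the nearest obstacle per direction, then turn that into a spoken hint for the headset and a vibration level for the belt. A minimal sketch, assuming a Kinect-style depth frame as a numpy array (thresholds and sector layout are my assumptions):

```python
import numpy as np

# Minimal sketch of a NAVI-style mapping from a depth image to non-visual
# feedback. A real Kinect frame would come from a driver like libfreenect;
# here we just assume a 480x640 array of distances in millimeters.

WARN_MM = 1200   # start warning below 1.2 m (threshold is an assumption)

def nearest_by_sector(depth_mm, sectors=3):
    """Split the frame into vertical sectors (left/center/right) and
    return the nearest valid distance in each, ignoring zeros (no data)."""
    h, w = depth_mm.shape
    result = []
    for s in range(sectors):
        band = depth_mm[:, s * w // sectors:(s + 1) * w // sectors]
        valid = band[band > 0]
        result.append(int(valid.min()) if valid.size else None)
    return result  # e.g. [None, 900, 2400]

def feedback(distances):
    """Turn sector distances into headset text and belt vibration levels."""
    names = ["left", "ahead", "right"]
    speech, vibration = [], []
    for name, d in zip(names, distances):
        if d is not None and d < WARN_MM:
            speech.append(f"obstacle {name}, {d / 1000:.1f} meters")
            vibration.append(min(255, int(255 * (1 - d / WARN_MM))))
        else:
            vibration.append(0)
    return "; ".join(speech), vibration  # vibration: PWM values for the belt

# Fake frame: a close obstacle on the right side.
frame = np.full((480, 640), 3000, dtype=np.uint16)
frame[:, 500:] = 800
print(feedback(nearest_by_sector(frame)))
```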
Here we have a system that looks funny (today) but does a great job: aiding people with their tasks in a non-visual way. Guidance systems especially could profit far more from audiovisual augmentations. Though admittedly, an annoying digital prompt in your ear is not the best solution for every task. ;-)
OK, got it. There is more to AR than vision. But I want to see something cool for my bored eyes!
- So, try out the feature tracking Seac02 shows us. The quality is really stable and the graphics get better all the time. At this rate we won't be able to distinguish the real from the virtual in a few years; only logic will tell us that the x-ray vision must be coming from our glasses. ;-)
- Mattias Wozniak and Björn Svensson from Sweden show us in their concept video for a master's thesis what AR could look like in our daily lives. Nice one!
Enjoy the weekend!