Rugged AR for the Industry
Let's talk about head-mounted tablets, alright? What? Tablets? Yep, you read that right. What are they, and where would you use them in heavy, rugged outdoor industry scenarios? Let's find out.
I had an interview with Andy Lowery from RealWear – in fact the first interview since they went public with their upcoming device and the press information! Today, I'd love to share the long talk we had about AR, how the information revolution will affect our lives, their specific hardware and plans of course, and the advantages of head-mounted systems. I also had the chance to try out the latest prototype device. But since the interview was just too long to quote 1:1, let me sum it up for your convenience.
Andy Lowery is surely no noob to the AR scene and the industry. If you work in this realm you will know that he used to be the president of DAQRI, which also produces smart helmets, but with a different focus according to Andy. Before that, he was Chief Engineer at Raytheon (working on electronic warfare), and way back he came from the US Navy as a nuclear surface warfare officer.
During his time at DAQRI he was pushing industrial AR, also working with partners like metaio. The idea came up to attach AR technology to any worker's hard hat in the field where needed. Since people wear the helmets anyway, adding technology that can easily be put on and taken off would be a winning combination and a logical step. Since DAQRI was going for a different roadmap, he founded RealWear to push in this direction. Chris Parkinson from Kopin (who produced the Golden-i wearable system) joined forces with him, and they are getting closer to their release. Time to take a look at the device!
What can it do for whom?
Andy described the history of the development and the special requirements that come up in their clients' environments. Other smart glass competitors (like Vuzix) might not meet these: in the oil, gas and mining business (out in the field), you need a ruggedized, dustproof, waterproof – and sometimes even fire- or explosion-withstanding – design. RealWear describes the system as follows:
“Featuring an intuitive, 100 percent hands-free interface, our forthcoming RealWear HMT-1 brings remote video collaboration, technical documentation, industrial IoT data visualization, assembly and maintenance instructions and streamlined inspections — right to the eyes and ears of workers in harsh and loud field and manufacturing environments.”
They approached the design by asking “what would I do with an industrial device? What would it look like?”, and two aspects turned out to be key: it needs to be hands-free (people are working while using the system) and non-intrusive (for safety reasons). Andy said people in the field simply reject gadgets that require your hands on glossy touch screens (“This is ridiculous!”). So the complete system is speech-controlled and can be pulled out of your field of view with one move.
Let's do the Live Demo
So, I got to try on the latest design. The first impression was that you don't really notice the weight. It's as comfortable as it can get with a hard hat on. The video screen arm can be easily adjusted so that you have it directly in front of your eye (either left or right) or in a peripheral position, where you only look at the screen when glancing down (keeping a clear sight to the front). Image quality and brightness looked good, too, though I first had to find a sweet spot in distance and angle to be comfortable for a longer session.
Then I browsed the menu and triggered all commands through speech: open a document, change the zoom level, close a window, play a video, open a report, write a report, take a photo, and so on. Recognition of these fixed keywords was stable and was only triggered by me (not by others in the room saying the same phrases). The given tasks worked flawlessly, and some small but helpful features make the interaction easier. For example, you can either zoom and pan a document (e.g. a circuit board layout) by speech, or alternatively activate a mode that virtually fixes the document in the air and changes the visible part as you move your head.
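The head-motion panning mode can be pictured as mapping the head's rotation, relative to the orientation at the moment the document was “frozen” in the air, to a pixel offset in the document. RealWear's actual implementation is not public, so the class, scale factor and clamping below are purely my own illustrative assumptions:

```python
class AirAnchoredViewer:
    """Sketch: 'fix' a document in the air, then pan it by head motion.
    px_per_deg (how many document pixels one degree of head rotation moves)
    and the clamping to the document bounds are made-up values."""

    def __init__(self, doc_w, doc_h, px_per_deg=50):
        self.doc_w, self.doc_h = doc_w, doc_h
        self.px_per_deg = px_per_deg
        self.anchor = None  # (yaw, pitch) captured when the user freezes the view

    def freeze(self, yaw_deg, pitch_deg):
        """Remember the current head orientation as the anchor."""
        self.anchor = (yaw_deg, pitch_deg)

    def pan_offset(self, yaw_deg, pitch_deg):
        """Translate head rotation away from the anchor into a pan offset,
        clamped so the view never leaves the document."""
        if self.anchor is None:
            return (0, 0)
        dyaw = yaw_deg - self.anchor[0]
        dpitch = pitch_deg - self.anchor[1]
        x = max(0, min(self.doc_w, int(dyaw * self.px_per_deg)))
        y = max(0, min(self.doc_h, int(-dpitch * self.px_per_deg)))  # look down = pan down
        return (x, y)
```

Turning your head 10° to the right after freezing would then scroll the document 500 px to the right (with these assumed parameters), and looking back at the anchor returns you to the original spot.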
Speech recognition is more capable when connected to the cloud, where you can use natural language to dictate reports; when working offline it is (currently) restricted to the fixed keywords.
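The offline behaviour can be sketched as a tiny dispatcher over a closed phrase list – matching whole, fixed keywords rather than parsing free speech, which is exactly what keeps recognition robust in loud environments. The phrases and actions below are made up for illustration; the real command set differs:

```python
def make_dispatcher(commands):
    """Return a function mapping one recognized utterance to an action.
    Offline mode only accepts exact phrases from a closed list."""
    def dispatch(utterance):
        action = commands.get(utterance.strip().lower())
        return action() if action else None  # unknown phrases are simply ignored
    return dispatch

# Hypothetical command set, for illustration only:
dispatch = make_dispatcher({
    "take photo": lambda: "camera.capture",
    "open document": lambda: "viewer.open",
    "zoom in": lambda: "viewer.zoom_in",
})
```

Free-form speech like “please write down what I found” falls through to `None` here – which is why the cloud-backed natural language mode is needed for dictating reports.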
The overlay happens only in the small screen, and you get head tracking via gyro and compass. GPS provides your position, and the camera can do additional vision-based tracking. But there is currently no “immersive AR”, as Andy calls it: no accurate overlay of information on the world is present today – but it could come in the future if the market needs it.
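Gyro-plus-compass head tracking is typically done with some form of sensor fusion. A classic minimal version is a complementary filter: integrate the gyro's yaw rate (smooth but drifting) and nudge the result toward the absolute compass heading (noisy but drift-free). Whether RealWear uses exactly this is my assumption; the code just illustrates the principle:

```python
def fuse_heading(heading, gyro_rate, compass_heading, dt, alpha=0.98):
    """One complementary-filter step. Headings in degrees, gyro_rate in deg/s.
    alpha close to 1 trusts the gyro short-term; the remaining (1 - alpha)
    slowly pulls the estimate toward the compass to cancel gyro drift."""
    predicted = heading + gyro_rate * dt  # dead-reckon with the gyro
    # shortest signed angular difference to the compass reading
    err = (compass_heading - predicted + 180.0) % 360.0 - 180.0
    return (predicted + (1.0 - alpha) * err) % 360.0
```

Called once per sensor sample, this keeps the tracked heading responsive to quick head turns while the compass quietly corrects any slow drift.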
So he could not show all features, since the system was not hooked up to the cloud and company data, but we then talked more about the fields of usage.
Use Cases and Advantages
So, as said, they target industries like the oil, gas and mining market, where staff would use the systems on oil rigs, oil platforms or in dangerous spaces. People get instructions, measurement data or blueprints presented. A remote helper could connect via telepresence and communicate with the user to support the current task (also adding drawings or markers into the field of view from a distance to point to the right spot). Training scenarios remain important, too.
For training, Andy mentioned several studies that showed the advantage of AR-supported training. For example, Boeing ran a study where 50 students had to build an aircraft wing out of dozens of parts within roughly 30 minutes. They were untrained and had never done this task before. Three groups tried three different approaches: 1) desktop instructions, 2) hand-held tablet instructions and 3) hand-held tablet instructions with an AR view right at the object of interest. The results showed a clear improvement in speed and error-free work with AR: the task was finished 47% faster, with only 1.5 errors on average instead of 8. Other studies even showed AR training results comparable to “old school” training with a personal human tutor (and completely crushing paper instructions). These promising results still used a hand-held tablet – according to Andy, the numbers would go up even more when hands-free.
We talked about other use cases, too, such as homeland security or police officers – supporting their tasks with facial recognition (checking for registered bad guys) or license plate checks on the go. In general, connecting the system to the cloud and big data in the background could dramatically change our digitally enhanced working life. But what is crucial? The interface. Andy stated:
“The 21st century user interface does not have menus, file structures and all that stuff. It knows what you are looking at, knows where you are, knows what you are about to do.”
Systems like Amazon's Alexa or Siri – any intelligent device that has enough information, ideally including spatial awareness – will predict your actions and help you out just in time. Applied to your industrial working day, the system will also know your current assignment and role and provide the best-matching information accordingly. Systems like SAP, ThingWorx, etc. will be able to connect through the SDK to make this vision work.
With wearables that react to your point in space and time and your current activity, an information revolution will happen, Andy assures. It will take (much) more time, but it will happen and be a big game changer – comparable to the dawn of electricity (bringing power tools to the masses, starting the industrial revolution and democratising the technology).
Head Mounted Tablet – The Specs & Software
The details on the specs can be found on their page. The system runs regular Android – it uses tablet technology in the end, hence the name. A lot of software has been developed in the past for (rugged) outdoor tablets, and this can easily be ported over to the wearable. The device comes with a battery life of 6–12 hours and the option of hot-swapping batteries at run-time without any downtime. The camera holds a 16-megapixel chip and actively stabilizes the image. It comes with all the typical sensors in a rugged design.
Getting it on the road – My conclusion
Well, the final design is not available today, so I can't really give a verdict on the upcoming HMT-1. But if you are interested: they will start a “Pioneer Program” that lets you take part in the beta and get the first wave of devices. Final shipping is planned for summer 2017 at around a thousand bucks.
So for now I can only say that the current design already feels lightweight enough for a full day, and the complete speech control makes perfect sense in the given environment and worked well in the demo. Connectivity could not be tested, but I can imagine that with a regular 4G or 5G uplink you will be able to sync your work data up and down. It would have been nice to see more real-life demo scenarios to judge the workflow and usability.
AR in this device is not the funky AR we love to see – and expect from Magic Leap or a consumer HoloLens. It just gives you a video screen plus information overlaid on the camera feed, displayed on the mono screen. But it feels like a realistic, down-to-earth 2016 use case for the technology in that field. “It gets the job done.” and already improves a lot on traditional systems (paper manuals, phone calls, flying in the technician instead of his/her telepresence). We can imagine how things will get even more exciting once we have perfect AR overlays in it. A glimpse of more AR in industrial scenarios as described above can now also be seen in a new HoloLens video from Thyssenkrupp. Although the Microsoft design is obviously not rugged at all, it gives you an idea.
Banner photo (C) RealWear, Portrait photo (C) Tobias Kammann