Microsoft HoloLens: What Are the UX Implications?
Ever since Microsoft introduced the HoloLens a couple of weeks ago, we've been discussing the long-term implications the tech might have on user experience. When it was first announced, we were a bit thrown by Microsoft (and the press) describing the experience as "your world with holograms". While technically speaking the HoloLens has nothing to do with holograms or holography, it does fall into the general category of "augmented reality". Unlike the more immersive "virtual reality" delivered by the Oculus Rift, augmented reality in this case inserts high-fidelity 3D images into one's field of view. No doubt, it's %^&* cool.
But what might this technology portend for the UIs of the future? Is this a brave new world of "augmented reality experiences" that will displace the mouse as the preferred input device? Don't laugh...we've been hearing pedantic punditry all week about how the HoloLens is "revolutionary", "a view into the future of UI", and "the obvious evolution of input devices". Well...we've got our doubts.
One might argue that the underlying UI concepts for the HoloLens have been around for over 20 years, ever since Jaron Lanier pioneered VR headsets and the folks at Boeing experimented with augmenting specific real-world tasks, such as fixing a jet engine or identifying a needed part on a building construction site. What's changed dramatically is the "overhead" required to make the experience work effectively: now there's no "data glove", no additional computer bristling with inputs, no difficult-to-control intermediary display. Everything has been reduced to a fashion-forward, Daft Punk-style integrated headset. The software UI has obviously evolved in a similar way, with far more responsiveness, resolution, and dimensionality. But while the HoloLens is definitely awesome (we want one!), we think the UI model is far from being adaptable for general, long-term, practical use.
We think popular adoption won't happen because no attempt at "revolutionizing" the basic human-computer interaction model has succeeded since Douglas Engelbart introduced the mouse in 1968. Yeah...that's almost 50 years ago. With Moore's law churning along all these years, the mouse (and its stationary cousin the trackpad, and the trackball before it) has kept up quite well, thank you. It's stood the test of time not just because it's "what people are used to", but because it strikes a good balance between simplicity, intuitiveness, and power. Ever try the Leap Motion device, which allows you to "reach into new worlds and control your computer in new ways"? One word: "yawwwwwwn". It got relegated to our Black Mercury Box of Misfit Geek Toys a long time ago. Unless it's a special-purpose application that hugely benefits from combining "the real" with "the virtual", we think the HoloLens will follow in the footsteps of breathtakingly cool but limited-use products like giant connected touch screens.
But we hear you saying, "Hey, the whole desktop computer UI thing is old school...the future of UI is all about mobile contexts." Yeah, that's a good point. Pinch-and-zoom and all that did establish a new UI paradigm that became widely adopted. While we'll always need to drive our "trucks" with a good UI, it's true that mobile devices are at the vanguard of HCI. So...why aren't augmented reality apps catching on? You haven't been using the "Monocle" view in Yelp as often as you thought you might (neither have we)? You haven't heard of one of "the best augmented reality apps of 2014", "Dragon Adventure World Explorer"? Mmm...yeah. Neither had we, until we looked it up. This reinforces our perspective: if we haven't figured out how to create a truly valuable and universally appealing augmented reality app on mobile devices (which should inherently be great for this kind of thing), then headset-mounted, custom-API, gestural UIs won't be ready for primetime anytime soon.