Thu, 14 Jan 2016 14:39:50 GMT
Before the holiday break, I was included in an LBB editorial piece answering what technology I would want most for Christmas. I chose the Microsoft HoloLens, which had yet to be released at that time, for the very technical reason that it just looks cool. Well, to my surprise my wish sort of came true, as I was able to secure a spot at one of the developer demos at the NYC roadshow that opened up over the holiday break. And while I may not have my very own headset (yet), I was able to get hands-on and play with one. And it was awesome.
I arrived at the Microsoft store on 5th Avenue and was whisked away to a secret testing facility where I joined two other people for our exclusive preview. After a short demonstration on how to wear and use the device, we broke off into three one-on-one demonstrations, each focusing on a different HoloLens application. The demos carefully took the wearer through a set of examples designed to highlight the capabilities of the HoloLens in different uses.
The first experience focused on building and modeling in 3D space. It started by having me pull a 3D object out of a virtual toolbox and center it in the middle of an empty room. This was my first impression of how the HoloLens positions virtual objects in real space. As I walked around the room looking through the device, the 3D object hovered perfectly in the center and felt like it was actually there. From a technical viewpoint, there was no “slippage” or jitter in the tracking, and the framerate of the device was quick enough to keep up with my eyes and make it feel seamless. Of course, the true value here is that you can walk around the object as you work with it, giving you a true sense of its real-world scale. Many commands were issued by voice control, with me verbally telling the HoloLens to 'copy' or 'rotate'. The standard control for confirming an action - the equivalent of a mouse click or screen tap - was the air tap. This gesture involves sticking your arm out straight with a closed fist and index finger pointed upwards, then making a quick, stiff downward movement with the finger. The more mesmerized I became by the applications, the sloppier I got with my air tap, either drifting my arm out of the device’s view or flimsily moving my finger so it wasn’t detected. But as with most things, a few more goes at it should make it feel natural.
Next, I moved on to the game experience, a simple shoot-em-up in which I used an Xbox 360 controller. My role was to defend the room from invading robots that entered through chutes bursting through the walls. The demo illustrated how games can directly make use of the environment the user is in. The enemies adapt to the surfaces, so it appears as if they are actually scurrying around the room you’re in. The demo included a destructible environment, so shooting a wall (i.e., missing a target) caused the wall to explode and expose the scaffolding behind it. And as in the previous modeling demo, the ability to attach and track 3D geometry to real-world objects was impressive, with very little jittering or misalignment.
The last room focused on virtual storytelling and was directly curated to target advertisers and marketers. Given Firstborn’s history of creating VR experiences on the Oculus Rift and Gear VR for brands like Mountain Dew, Audible, and Patrón, this was an area I wanted to focus on. (Note: Microsoft made sure not to use the terms “virtual reality” or “augmented reality”, instead opting to frequently call it “mixed reality”.) The demo here started with a simulated watch floating above a physical stand, which then exploded so that all the little gears and servos could be exposed. I could then walk around it, look at it from every angle, and see more information by looking directly at certain hotspots. The resolution of the projection was high enough to easily render readable text. Considering the low-resolution screen found in the first Oculus developer kit, this higher resolution in the pre-development kit of the HoloLens seems promising (and offers a clue to the current $3,000 price tag).
After exploring the watch some more, the demo shifted to using the HoloLens to build and give presentations. I doubt I’ll be making any 3D keynotes anytime soon, but one of the more interesting pieces here was the potential metrics that could be obtained from this sort of in-store demo. The example showed a heatmap textured onto the object, with the “hotter” areas indicating where users looked the most. This was an interesting take on the heatmaps sometimes generated for 2D screen user interfaces.
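The underlying idea is simple enough to sketch: bucket the points where the user's gaze ray intersects the object's surface into grid cells, and treat higher counts as "hotter" areas. The sketch below is purely illustrative - the function names, coordinates, and grid resolution are my own assumptions, not anything from the HoloLens SDK.

```python
# Hypothetical sketch of a gaze heatmap: count gaze-hit points per grid
# cell on an object's surface. Names and cell size are illustrative
# assumptions, not the actual HoloLens API.
from collections import defaultdict

def accumulate_gaze(samples, cell_size=0.05):
    """Bucket (x, y) gaze-intersection points (in meters, on the
    object's surface) into grid cells; higher counts = hotter areas."""
    heat = defaultdict(int)
    for x, y in samples:
        cell = (int(x // cell_size), int(y // cell_size))
        heat[cell] += 1
    return dict(heat)

# Example: three of four gaze samples cluster around one spot.
samples = [(0.10, 0.10), (0.11, 0.12), (0.12, 0.11), (0.40, 0.40)]
heat = accumulate_gaze(samples)
hottest = max(heat, key=heat.get)  # the cell users looked at most
```

In a real renderer, the per-cell counts would then be normalized and mapped to a color gradient baked into the object's texture.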
Lastly, to drive home the true power of the HoloLens to put anything into any space, the demo ended by filling the room with a model of the solar system. The concept here was that the tool could be used for educational purposes, using spatial dimensions to illustrate harder-to-grasp concepts. Being able to see thousands of tiny asteroids floating past your nose in 3D is just about one of the coolest things the nerdiest of minds can experience.
The device itself has a pair of small lenses inside of it, which is where the image - the hologram - actually appears. These sit between the user’s eyes and the gray-tinted visor seen in most of the promotional material. Unfortunately, the display does not cover your whole field of view, so you are left looking through a noticeably rectangular viewport. Because objects get cropped to this space, looking at large-scale or close-up items requires a lot of head movement. It doesn’t feel as if you’re fully surrounded by this mixed-reality world, but rather that there is a hidden world around you made visible through a small rectangular window. It’s like you’re peeking into another dimension, one that's layered directly on ours but invisible to the naked eye.
The rest of the hardware and software - from the projection in front of you, to the video frame rate, to the spatial tracking and mapping - does make it feel like everything is actually there in the room, just out of reach. The device, while not the most comfortable thing to wear (especially considering I need glasses to see anything in general), didn’t feel overly bulky or awkward. It’s not something I’d be thrilled to wear for 8 hours straight, but I’m sure that can be improved. It's also important to note that the device is fully self-contained; there are no wires connected to anything, so you can wander around at will. That said, I was always near a PC (or Surface), so I am unsure how much processing happens on the device itself versus on a nearby workstation connected via Bluetooth.
Either way, for something that is more or less still very much in development, it’s super impressive.
Microsoft will be releasing development kits for purchase to selected developers in the coming months, so hopefully we’ll be lucky enough to snag one or two. The technology is young, and there’s a lot of great, untapped potential with a device like this. I’m excited to see what we’ll come up with.
Eric Decker is VP of Technology at Firstborn