Yesterday I was lucky enough to get a chance to try out the much-vaunted HoloLens, a completely new device from Microsoft that provides “augmented reality,” as opposed to virtual reality devices like the Oculus Rift. The distinction may sound subtle, but the difference is quite large: augmented reality projects objects into the actual room you are in, rather than moving you into an entirely different world as VR does.

HoloLens is quite the device. It tracks the physical space around you very well, and unlike VR, it requires no markers in the room or extra cameras to track movement. It is completely self-contained, and that may be its biggest win of all.

The device on hand was near-final hardware, and it looked exactly like what has been promised for some time. Although we did not get a chance to see the preview device in January, that unit was apparently nothing like what was presented at Build this week.

However, just like in January, we were not allowed to take any photos of the demonstration units, and all interaction with the device required locking our belongings in a locker before entering the room. They did, however, have a unit on display under glass for photo opportunities.

Let’s start with what they got right. Interacting with HoloLens is very easy. Only a couple of commands were needed, and gestures like the air tap were very simple to use; not once did I get a missed reading. That is extremely impressive considering the device is just watching my finger move in free space. When you want to interact with something, a cursor in the center of your field of view lets you focus on an object. The object is then highlighted, so there is no mistaking which object you are about to interact with.

Another interaction method was the mouse: when looking at a PC, you can simply drag the cursor off the display and it moves into free space. In my demo, which was built around an architecture theme, this allowed me to interact with the model, move walls around, and change the design.

Another cool feature was the ability to leave virtual notes. Looking at a wall, I could see that someone had left me a note, and with a simple air tap I was able to hear what they had left. Then I could leave a note of my own on the wall for that person to see later.

Another win was the device itself. You put it on somewhat like a welding mask, tightening the band on the back of your head with a wheel. Hopefully the devices are fairly durable, because we were helped quite a bit getting the device on and off, though that makes sense given the volume of people coming through the demo.

So what did it not deliver? The holograms had a very limited field of view. In the demos shown during the keynote, you could see holograms all around you, but the actual experience was nothing like that. Directly in front of you was a small box, and you could only see things inside that box, which means a lot of head turning to see what’s going on. In the construction demo they provided, I was supposed to look at a virtual “Richard” and was asked if I could see him. I could not. There was a bug, and Richard was lying on the floor, stuck through a wall. I understand these demos can have bugs, but with the limited field of view it was very hard to find where he was.

This demo is almost nothing like what you actually see in the device

The holograms themselves were very good, but they were so limited in scope that I can only hope some work can be done there before the device goes on sale. There is a tremendous opportunity here, and it would be awful for it to be spoiled by poor hardware. Although I didn’t get a chance to see the January demo, I’m told by several people who did that the field of view was much better on those units.

So my expectations were not met, and I attribute that to the demos provided online and during the keynote. What was shown on stage was amazing, but the actual experience was almost nothing like it.

One thing I wanted to know was what kind of hardware is inside, but there were no answers to be had right now. The device itself looked good, it felt good, and the audio was good, but the main attraction still leaves a lot to be desired.

33 Comments


  • Frenetic Pony - Friday, May 1, 2015 - link

    It's not, it's from resolution. Tiny displays like this (it sounds like they're using a projector type) can't reach very high resolutions relative to how close they are to your face. They're fine today for phones, but imagine holding a phone 2 inches from your face: even with a 1440p display you'll still see a screen-door effect.

    By narrowing the field of view, you are essentially doing the same thing as shrinking the phone's display size in the above example, even though you're keeping the resolution the same. It's a tradeoff that currently has to be made between actually looking like anything beyond a 1980s 240x180 display and having a field of view you can actually use.
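The tradeoff described in that comment can be made concrete with a quick back-of-the-envelope calculation. The numbers below (a 2560-pixel-wide panel, 90- and 30-degree horizontal fields of view) are illustrative assumptions rather than confirmed HoloLens specifications; the point is simply how pixel density per degree scales as the field of view narrows:

```python
# Rough angular-resolution sketch. Display specs are illustrative
# assumptions, not confirmed HoloLens hardware figures.

def pixels_per_degree(h_pixels: int, h_fov_deg: float) -> float:
    """Average horizontal pixel density across the field of view."""
    return h_pixels / h_fov_deg

# A phone-class 1440p panel stretched across a 90-degree, VR-style FoV:
wide = pixels_per_degree(2560, 90.0)    # ~28 px/deg: visible screen door

# The same panel confined to a ~30-degree box, as the comment describes:
narrow = pixels_per_degree(2560, 30.0)  # ~85 px/deg: much crisper

print(f"90 deg FoV: {wide:.1f} px/deg")
print(f"30 deg FoV: {narrow:.1f} px/deg")
```

The same pixel budget spread over a third of the angle triples the apparent sharpness, which is exactly the tradeoff the comment describes.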
  • edzieba - Sunday, May 3, 2015 - link

    It's almost certainly a physical constraint. Going by Oliver Kreylos' description of the HoloLens (http://doc-ok.org/?p=1223 and https://www.reddit.com/r/oculus/comments/34k7pn/re... ), it is using a pretty standard LCOS/DMD light modulator: a point illuminator (colour LED) plus a lightguide (in this case the source of the 'holographic' terminology, due to the use of diffraction gratings rather than pure total internal reflection). These modules are common in industrial HMDs and HUDs, though generally tethered to a stationary external machine or a laptop-in-a-backpack, and paired with a more robust external tracking system like a Flock of Birds magnetic tracker or a multiple-camera tracking rig.

    This small FoV is a fundamental limitation of compact microdisplay optics at the moment. A larger FoV will require fundamentally different optics, e.g. PinLight displays or metamaterial lenses. Microsoft will need to either wait for an optics module producer to start manufacturing displays using these new technologies (once they have matured) or actually take design and manufacture in-house to develop them.
  • edzieba - Sunday, May 3, 2015 - link

    That second link should be: https://www.reddit.com/r/oculus/comments/34k7pn/re...
  • jjj - Friday, May 1, 2015 - link

    Thanks for a more realistic take than what most of the press is going with. The first time they showed it, it was really hard to get the real info too.
    So it's a Google Glass for both eyes, with Project Tango and some interesting software.
    It's way bulkier and for indoor use only, so in practice it will be hard to find uses for it unless the field of view gets wider. Pricing will be tricky too.
    I really like that they are trying, and they did some positive things, but as it is, it will be hard to market. Hopefully it gets better and cheaper fast. It's also good that it forces Google and others to keep investing in glasses.
    If they widen the field of view and add the capability for the cover "lens" to change opacity, it would be a much, much bigger deal, and hopefully that is doable at least for the second-gen hardware. It would be, I think, the first to unify VR and AR.
    I was curious about the hardware too: what kind of projection tech, how many cameras, and so on.
  • jjj - Friday, May 1, 2015 - link

    This also shows how Oculus might find itself years behind, because they went with delay after delay and external hardware.
    For FB, every month of delay might end up costing them billions. Instead of having a huge user base and being the default by now, they might end up far behind by next year and have to waste a few years trying to catch up.
    For Google, on the OS side, it's a reminder that there is a new OS race, and that not trying much harder to scale Android to all screen sizes was a huge mistake. Android not being a competitor on the PC side gives M$ a significant advantage.
  • Krysto - Thursday, May 7, 2015 - link

    Oculus just announced shipping in Q1 2016.
  • uhuznaa - Friday, May 1, 2015 - link

    Well, it's a bit like Google Glass in 3D that projects things into your field of view. Not too bad, actually. And I guess it will have a better resolution than the 640x360 pixels of Glass. It just looks more like an actual product you could buy.
  • jjj - Friday, May 1, 2015 - link

    Yeah, I said in my first post that it's like a dual Glass, but it is in no way more like an actual product; it's far worse from that point of view.
    Glass was a lot more discreet; this is way bulkier and heavier, so it's pretty much indoor-only.
    Glass had a bunch of scenarios where it was useful; this has none, and making it costs a lot more.
    Glass or Oculus can actually be sold in high volume; this really can't.
    Of course the final goal is to converge VR and AR, and from that point of view M$ is closer to it. But that's for future generations of hardware in 1-2 years, hopefully, while this version is just a cool toy nobody needs.
    Reply
  • Alexvrb - Friday, May 1, 2015 - link

    Glass is really nothing like this. Glass doesn't project AR images into your view (like adding virtual objects to a real environment, or floating AR windows). Glass doesn't watch your hand movements for manipulating said AR objects. Saying it's "dual Glass" is drivel.
  • jjj - Saturday, May 2, 2015 - link

    That's Project Tango and Kinect added to it, and both were on track to be used in a consumer version.
    And it is just like Glass, but for both eyes and in the center of your line of sight. You might have read overexcited articles from other sources and gotten the wrong idea about what this is.
    It can be great, but in a few years, not now.
