We've been reviewing the HoloLens at Concurrency and assessing its capabilities from both a development and a usability perspective. One thing I thought would be helpful for people to see is a couple of videos of the HoloLens in action from a first-person point of view.
In the first video, you can see how a user creates a 3D mesh of an environment. Once a 3D mesh exists, the HoloLens can figure out where it is and differentiate one room from another; it essentially treats the mesh as a fingerprint of a particular space. In addition, once an environment is meshed, holograms can be placed within it and will remain there until moved. In this video, I simply use the air-tap gesture to instruct the HoloLens to start meshing. I also use Cortana to launch the Edge browser and navigate to Concurrency.com, all without using a holographic keyboard.
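For developers curious how that room mesh is exposed, here is a rough sketch of how a native (non-Unity) holographic app can observe spatial surfaces through the Windows Perception APIs via C++/WinRT. It assumes the app already has a SpatialCoordinateSystem from its frame of reference and has declared the spatialPerception capability in its manifest; the function name and the bounding-box and triangle-density values are just illustrative, not anything specific to the videos above.

```cpp
// Sketch only: observing the spatial mapping mesh from a C++/WinRT holographic app.
// Assumes the caller passes in the app's SpatialCoordinateSystem.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Numerics.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.Perception.Spatial.Surfaces.h>

using namespace winrt::Windows::Perception::Spatial;
using namespace winrt::Windows::Perception::Spatial::Surfaces;

winrt::Windows::Foundation::IAsyncAction StartSurfaceObservationAsync(
    SpatialCoordinateSystem const& coordinateSystem)
{
    // Ask the user for permission to read the room mesh.
    SpatialPerceptionAccessStatus status = co_await SpatialSurfaceObserver::RequestAccessAsync();
    if (status != SpatialPerceptionAccessStatus::Allowed)
    {
        co_return;
    }

    // Observe surfaces inside a 10 m box around the origin of the coordinate system.
    SpatialSurfaceObserver observer;
    SpatialBoundingBox box{ { 0.0f, 0.0f, 0.0f }, { 10.0f, 10.0f, 10.0f } };
    observer.SetBoundingVolume(SpatialBoundingVolume::FromBox(coordinateSystem, box));

    // Each observed surface can be turned into a triangle mesh for rendering or physics.
    for (auto const& pair : observer.GetObservedSurfaces())
    {
        SpatialSurfaceInfo info = pair.Value();
        SpatialSurfaceMesh mesh = co_await info.TryComputeLatestMeshAsync(1000.0);
        // mesh.VertexPositions() and mesh.TriangleIndices() hold the raw mesh buffers.
    }
}
```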
In the second video, you can see how to interact with a simple hologram; in this case, a cat sitting on a chair. Using your head to aim a small dot cursor (think mouse pointer), you can engage the hologram with the same air-tap gesture.
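Again for the developer-minded, here is a rough C++/WinRT sketch of how that head-gaze pointer and air-tap combination can be wired up in a native holographic app. The class and member names are hypothetical, and the snippet assumes it runs inside an app that already has a coordinate system and a per-frame PerceptionTimestamp available.

```cpp
// Sketch only: wiring the air-tap gesture and the head-gaze "pointer" with C++/WinRT.
#include <winrt/Windows.Perception.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.UI.Input.Spatial.h>

using namespace winrt::Windows::Perception;
using namespace winrt::Windows::Perception::Spatial;
using namespace winrt::Windows::UI::Input::Spatial;

struct GazeTapInput
{
    SpatialInteractionManager m_interactionManager{ nullptr };
    SpatialGestureRecognizer m_recognizer{ SpatialGestureSettings::Tap };

    void Initialize()
    {
        // The interaction manager raises low-level events for hands, voice, and clickers.
        m_interactionManager = SpatialInteractionManager::GetForCurrentView();

        // Route every detected interaction into the gesture recognizer so it can
        // resolve air-taps.
        m_interactionManager.InteractionDetected(
            [this](SpatialInteractionManager const&, SpatialInteractionDetectedEventArgs const& args)
            {
                m_recognizer.CaptureInteraction(args.Interaction());
            });

        // Fired once the recognizer decides the interaction was a tap.
        m_recognizer.Tapped(
            [](SpatialGestureRecognizer const&, SpatialTappedEventArgs const&)
            {
                // React to the air-tap, e.g. trigger an animation on the targeted hologram.
            });
    }

    // Head gaze: a ray from the user's head used to aim the cursor dot.
    static void GetGazeRay(SpatialCoordinateSystem const& coordinateSystem,
                           PerceptionTimestamp const& timestamp)
    {
        if (SpatialPointerPose pose = SpatialPointerPose::TryGetAtTimestamp(coordinateSystem, timestamp))
        {
            auto origin = pose.Head().Position();            // where the user's head is
            auto direction = pose.Head().ForwardDirection(); // where the user is looking
            // Ray-cast origin + direction against the scene to place the cursor
            // and decide which hologram an air-tap should hit.
            (void)origin; (void)direction;
        }
    }
};
```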
Hopefully, this short blog post has been helpful in showing you what an actual first-person experience looks like. These videos were captured with the HoloLens's native recording capabilities.