Microsoft’s HoloLens is still such new tech that even XR enthusiasts have rarely laid eyes on one, but work is already well underway at the Redmond, Washington firm on the follow-up hardware.
Competitors Facebook and Google have acknowledged AI as central to the success of AR – sorry Microsoft, MR – and now Microsoft has revealed that HoloLens 2.0 will incorporate a coprocessor dedicated to running Deep Neural Networks (DNNs) for AI.
Speaking at the computer vision conference CVPR 2017, Harry Shum, Executive Vice President of Microsoft’s Artificial Intelligence and Research Group, announced that the second version of the HoloLens’ custom multiprocessor, the Holographic Processing Unit (HPU), will incorporate an AI coprocessor to natively and flexibly implement DNNs. The chip supports a wide variety of fully programmable layer types.
At the Hawaii event, Shum demonstrated an early build of the next-gen HoloLens HPU running live code implementing hand segmentation.
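To give a flavour of what “hand segmentation with a DNN layer” means, here is a deliberately tiny sketch in Python/NumPy: one convolutional filter plus a sigmoid, thresholded into a per-pixel “hand / not hand” mask. This is purely illustrative – the averaging kernel, bias, and 6×6 toy frame are invented for the example, and Microsoft’s actual network and HPU implementation are not public in this detail.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'same'-padded 2-D filter (cross-correlation, as in most
    deep learning frameworks) - a toy stand-in for one DNN conv layer."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def segment(image, kernel, bias=0.0, threshold=0.5):
    """One conv layer + sigmoid + threshold: a boolean per-pixel mask."""
    return sigmoid(conv2d(image, kernel) + bias) > threshold

# Toy 6x6 'frame': a bright blob standing in for a hand-like region.
frame = np.zeros((6, 6))
frame[2:5, 2:5] = 1.0

# Hypothetical weights: a 3x3 averaging kernel in place of learned ones.
kernel = np.full((3, 3), 1.0 / 9.0)
mask = segment(frame, kernel, bias=-0.4)  # True inside the blob
```

A real segmentation network stacks many such layers with learned weights; the point of the HPU 2.0 coprocessor is to run exactly this kind of per-pixel arithmetic on-device, continuously, rather than in the cloud.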
Marc Pollefeys, Director of Science for HoloLens, offered more detail in a research blog post: deep learning faces two challenges in that it requires large amounts of labelled training data, and the compute it demands is ill suited to current general-purpose processor/memory architectures.
Pollefeys dismisses Field-Programmable Gate Arrays (FPGAs), saying, “These approaches have primarily enhanced existing cloud computing fabrics... [but don’t help with] any compute we want to run locally for low latency, which you need for things like hand-tracking”.
The answer? If you’re Microsoft, create custom silicon:
The AI coprocessor is designed to run continuously, powered by the HoloLens battery. “This is the kind of thinking you need if you’re going to develop mixed reality devices that are themselves intelligent,” said Pollefeys, who also observed that “Mixed Reality and Artificial Intelligence represent the future of computing”.