posted by Fnord666 on Tuesday July 25 2017, @01:03PM   Printer-friendly
from the overlords-with-hololens dept.

HoloLens 2 can learn!

Microsoft announced that the second generation of the HoloLens' Holographic Processing Unit (HPU) will contain a deep learning accelerator. When Microsoft first unveiled the HoloLens, it said the headset comes with a special kind of processor, called an HPU, that accelerates the "holographic" content displayed by the HMD. The HPU is primarily responsible for processing the information coming from all the on-board sensors, including a custom time-of-flight depth sensor, head-tracking cameras, the inertial measurement unit (IMU), and the infrared camera.

The first-generation HPU contained 24 digital signal processors (DSPs), an Atom processor, 1GB of DDR3 RAM, and 8MB of SRAM cache. The chip can sustain one trillion operations per second (1 TFLOPS) in under 10W of power, with 40% of that power going to the Atom CPU. The first HPU was built on a 28nm planar process, and if the next-generation HPU is built on a 14/16nm or smaller FinFET process, the increase in performance could be significant. However, Microsoft has not yet revealed which process node the next-generation HPU will use.
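The efficiency implied by those figures can be worked out directly. A short back-of-the-envelope calculation (using only the numbers quoted above; the per-watt figures are derived, not from Microsoft):

```python
# Back-of-the-envelope efficiency from the article's figures:
# 1 TFLOPS total in under 10 W, with 40% of the power budget
# going to the Atom CPU rather than the DSP cluster.

TOTAL_FLOPS = 1e12      # 1 teraflop/s claimed for the first-gen HPU
TOTAL_POWER_W = 10.0    # upper bound on package power
ATOM_SHARE = 0.40       # fraction of power consumed by the Atom CPU

overall_efficiency = TOTAL_FLOPS / TOTAL_POWER_W      # FLOPS per watt
dsp_power = TOTAL_POWER_W * (1 - ATOM_SHARE)          # watts left for the DSPs
dsp_efficiency = TOTAL_FLOPS / dsp_power

print(f"Overall: {overall_efficiency / 1e9:.0f} GFLOPS/W")        # 100 GFLOPS/W
print(f"DSP cluster alone: {dsp_efficiency / 1e9:.0f} GFLOPS/W")  # ~167 GFLOPS/W
```

So even treating the 10W as a hard ceiling, the DSP cluster delivers its teraflop on roughly 6W, which is what a die shrink to FinFET would improve further.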

What we know so far about the second-generation HPU is that it will incorporate an accelerator for deep neural networks (DNNs). The deep learning accelerator is designed to work offline, running entirely off the HoloLens' battery, which means it must be quite efficient while still delivering significant gains for Microsoft's machine learning code.
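Microsoft has not disclosed how its accelerator works, but battery-powered DNN hardware commonly gets its efficiency by trading 32-bit floating-point math for 8-bit integer math, which cuts memory traffic and energy per multiply-accumulate. A minimal, purely illustrative sketch of that idea (the scales and values here are made up for the example):

```python
# Illustrative sketch of 8-bit quantized inference -- NOT Microsoft's
# actual design. Low-power accelerators often store weights and
# activations as int8 and accumulate in int32, rescaling once at the end.

def quantize(values, scale):
    """Map float values to int8 with a simple symmetric scheme."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(a_q, b_q, a_scale, b_scale):
    """Dot product done in integer arithmetic; one rescale at the end."""
    acc = sum(x * y for x, y in zip(a_q, b_q))  # int32 accumulator
    return acc * a_scale * b_scale

weights = [0.5, -1.2, 0.3]          # example layer weights
activations = [1.0, 0.25, -0.75]    # example inputs

w_q = quantize(weights, 0.01)       # scales chosen for illustration
a_q = quantize(activations, 0.01)

approx = int8_dot(w_q, a_q, 0.01, 0.01)
exact = sum(w * a for w, a in zip(weights, activations))
print(approx, exact)  # nearly identical results at much lower precision
```

The integer path gives almost the same answer as the float path while moving a quarter of the data, which is the kind of trade-off that lets a DNN accelerator live within a head-mounted display's power budget.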


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Tuesday July 25 2017, @06:33PM (1 child)

    by Anonymous Coward on Tuesday July 25 2017, @06:33PM (#544272)

    > While predictive rendering introduces some glitches, it can be tuned to prevent more errors than it introduces.

In talking to high-end simulator suppliers (the multi-million-dollar simulators used in F1 and other professional racing), one consistent comment is, "Never give a false cue." I believe this was originally relating to motion cues (moving simulator base), but probably applies to visual and audio cues as well. Better to do nothing (leaving a hole for the brain to fill in?) than to explode (mentally) the simulation with something that is wrong.

  • (Score: 2) by JNCF on Tuesday July 25 2017, @06:47PM

    by JNCF (4317) on Tuesday July 25 2017, @06:47PM (#544281) Journal

    Evolution disagrees.