SoylentNews is people

posted by Fnord666 on Tuesday July 25 2017, @01:03PM   Printer-friendly
from the overlords-with-hololens dept.

HoloLens 2 can learn!

Microsoft announced that the second generation of the HoloLens' Holographic Processing Unit (HPU) will contain a deep learning accelerator. When Microsoft first unveiled the HoloLens, it said the device would come with a special kind of processor, called an HPU, to accelerate the processing of the "holographic" content displayed by the HMD. The HPU is primarily responsible for processing the information coming from all the on-board sensors, including a custom time-of-flight depth sensor, head-tracking cameras, the inertial measurement unit (IMU), and the infrared camera.

The first generation HPU contained 24 digital signal processors (DSPs), an Atom processor, 1GB of DDR3 RAM, and 8MB of SRAM cache. The chip can achieve one teraflop per second for under 10W of power, with 40% of that power going to the Atom CPU. The first HPU was built on a 28nm planar process, and if the next-generation HPU is built on a 14/16nm or smaller FinFET process, the increase in performance could be significant. However, Microsoft has not yet revealed which process node will be used for the next-generation HPU.
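Taking the figures above at face value, a quick back-of-the-envelope calculation shows what they imply for the DSP cluster (illustrative only; the 40% Atom share is the only power breakdown given, and the remainder is assumed here to go to the DSPs):

```python
# Back-of-the-envelope efficiency estimate for the first-gen HPU,
# using only the numbers quoted above (not official specs).

TOTAL_FLOPS = 1e12   # "one teraflop" total throughput
TOTAL_WATTS = 10.0   # "under 10W" total power budget
ATOM_SHARE = 0.40    # 40% of power goes to the Atom CPU
NUM_DSPS = 24

dsp_watts = TOTAL_WATTS * (1 - ATOM_SHARE)   # power left for the DSPs
flops_per_watt = TOTAL_FLOPS / dsp_watts     # efficiency of the DSP cluster
flops_per_dsp = TOTAL_FLOPS / NUM_DSPS       # average throughput per DSP

print(f"DSP power budget: {dsp_watts:.1f} W")
print(f"Efficiency: {flops_per_watt / 1e9:.0f} GFLOPS/W")
print(f"Per-DSP throughput: {flops_per_dsp / 1e9:.1f} GFLOPS")
```

That works out to roughly 167 GFLOPS/W for the DSP cluster and about 42 GFLOPS per DSP, which is why a process shrink from 28nm planar to FinFET could matter so much for the power budget.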

What we do know so far about the second-gen HPU is that it will incorporate an accelerator for deep neural networks (DNNs). The deep learning accelerator is designed to work offline, running on the HoloLens' battery, which means it must be quite power-efficient while still providing significant benefits to Microsoft's machine learning code.
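Microsoft has not disclosed the accelerator's architecture. As a rough illustration of what such hardware speeds up, DNN accelerators typically hard-wire large arrays of multiply-accumulate (MAC) operations at reduced precision, accumulating narrow products into a wide register. A minimal sketch of that quantized dot product (everything below is a generic illustration, not the actual HPU 2 design):

```python
# Illustrative only: the core multiply-accumulate (MAC) loop that DNN
# accelerators implement in fixed-function hardware, here with 8-bit
# quantized values and a wide accumulator.

def quantize(xs, scale):
    """Map floats to int8 range with a simple symmetric scale."""
    return [max(-128, min(127, round(x / scale))) for x in xs]

def int8_dot(a, b):
    """Accumulate int8 products, as a hardware MAC array would."""
    acc = 0
    for x, y in zip(a, b):
        acc += x * y
    return acc

scale = 0.05
weights = quantize([0.10, -0.25, 0.40], scale)   # -> [2, -5, 8]
inputs  = quantize([0.50,  0.20, -0.10], scale)  # -> [10, 4, -2]

# Dequantize the integer accumulator back to a float result.
result = int8_dot(weights, inputs) * scale * scale
print(result)  # ~ -0.04, matching the float dot product
```

The power win comes from doing thousands of these narrow integer MACs in parallel instead of full floating-point multiplies, which is what makes on-battery inference plausible.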


Original Submission

Related Stories

Microsoft Announces $3,500 HoloLens 2 With Wider Field of View and Other Improvements 5 comments

Microsoft Reveals HoloLens 2 with More than 2x Field of View & 47 Pixels Per Degree

Microsoft today revealed HoloLens 2 at MWC 2019 in Barcelona. The headset features a laser-scanning display which brings a field of view that's more than 2x the original HoloLens and 47 pixels per degree.

HoloLens visionary Alex Kipman took to the stage in Barcelona to introduce HoloLens 2 which addresses many of the key criticisms of the original headset: field of view, comfort, and hand-tracking.

Kipman says that HoloLens 2 "more than doubles" the field of view of the original HoloLens, though he hasn't yet specified exactly what the field of view is. The original HoloLens field of view was around 35 degrees, so HoloLens 2 is expected to be around 70 degrees.
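Combining the stated 47 pixels per degree with the ~70 degree estimate gives a rough implied horizontal pixel count (a sketch with assumptions: it treats the estimate as horizontal field of view, and "more than 2x" could instead refer to area or diagonal):

```python
# Rough implied-resolution arithmetic from the figures above.
# Assumption: the ~70 deg estimate is horizontal FoV.

PIXELS_PER_DEGREE = 47
HOLOLENS1_FOV_DEG = 35                        # approximate original FoV
HOLOLENS2_FOV_DEG = 2 * HOLOLENS1_FOV_DEG     # "more than doubles"

implied_pixels = PIXELS_PER_DEGREE * HOLOLENS2_FOV_DEG
print(f"Implied horizontal pixels: {implied_pixels}")  # 3290
```

Under those assumptions the display would need on the order of 3,300 horizontal pixels to hold 47 ppd across the wider view, which shows why doubling FoV without dropping angular resolution is a hard display problem.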

[...] HoloLens 2 is also designed to be more comfortable, with much of the headset's bulk balanced in the back of the headset. Kipman said HoloLens 2 "more than triples the comfort" over the original HoloLens... though the exact weight, and how they came to that specific figure, are unclear. Still, the front portion of the headset is said to be made entirely from carbon fiber to cut down on weight and offers a convenient flip-up visor.

HoloLens 2 also brings hand-tracking which goes much further than the coarse gesture control in the original headset. Now with full hand-tracking, users can interact much more directly with applications by touching, poking, and sliding controls directly rather than using abstract gestures.

Also at Engadget.

See also: HoloLens 2 Specs Reveal 2–3 Hour 'Active' Battery Life, Optional Top Strap, & More
Mozilla is bringing Firefox to Microsoft's HoloLens 2

Previously: HoloLens - Microsoft's Augmented Reality Product
Microsoft Giving $500,000 to Academia to Develop HoloLens Apps
Microsoft Announces Surface Pro 4, Surface Book, and HoloLens Dev Edition
Microsoft HoloLens and its 24-Core Chip
HoloLens 2 to Include Machine Learning Accelerated Hardware
Ford Using Microsoft HoloLens to Help Design Cars
Leaked Microsoft Documents Describe Plans for Surface Tablets, Xbox, "Andromeda", and HoloLens
HoloLens to Assist Surgeons at UK's Alder Hey Children's Hospital
U.S. Army Awards Microsoft a $480 Million HoloLens Contract


Original Submission

Microsoft Kills Kinect 10 comments

Microsoft kills off Kinect, stops manufacturing it

Microsoft is finally admitting Kinect is truly dead. After years of debate over whether the accessory was really gone, the software giant has now stopped manufacturing it. Fast Co Design reports that the depth camera and microphone accessory has sold around 35 million units since its debut in November 2010. Microsoft's Kinect for Xbox 360 even became the fastest-selling consumer device back in 2011, winning recognition from Guinness World Records at the time.

In the years since its debut on the Xbox 360, a community built up around Microsoft's Kinect. It was popular among hackers looking to create experiences that tracked body movement and sensed depth. Microsoft tried to bring Kinect even more mainstream with the Xbox One, but the pricing and features failed to live up to expectations. Microsoft was then forced to unbundle Kinect from the Xbox One, and produced an unsightly accessory to attach the Kinect to the Xbox One S. After early promise, Kinect picked up a bad name for itself.

Kinect technology lives on in products such as HoloLens, Windows Hello cameras, and "Mixed Reality" headsets.


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Tuesday July 25 2017, @01:34PM (4 children)

    by Anonymous Coward on Tuesday July 25 2017, @01:34PM (#544159)

    I'm not sure what they plan to do with this... it will be too slow to train unless it is very small. I guess a lot of these techniques were devised on even slower devices though (but then you would wait weeks for a result).

    • (Score: 2) by takyon on Tuesday July 25 2017, @02:10PM

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Tuesday July 25 2017, @02:10PM (#544166) Journal

      That is the amount for the first generation device, not the second (amount unknown). HoloLens 1 actually has 2 GB of RAM total. 1 GB is dedicated to the "holographic processing unit".

      https://en.wikipedia.org/wiki/Microsoft_HoloLens [wikipedia.org]

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by Nerdfest on Tuesday July 25 2017, @03:38PM (1 child)

      by Nerdfest (80) on Tuesday July 25 2017, @03:38PM (#544194)

      What they plan to do? Based on recent experience .... "telemetry".

      • (Score: 2) by Gaaark on Tuesday July 25 2017, @04:34PM

        by Gaaark (41) Subscriber Badge on Tuesday July 25 2017, @04:34PM (#544215) Journal

        Based on my 'ancient' experience.... "BSO telemetry".
        XD

        --
        --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 0) by Anonymous Coward on Tuesday July 25 2017, @04:44PM

      by Anonymous Coward on Tuesday July 25 2017, @04:44PM (#544219)

      Maybe it's good for recognizing squirrels? (yesterday's article about using a Pi for scene recognition)

  • (Score: 2) by ledow on Tuesday July 25 2017, @04:45PM (3 children)

    by ledow (5567) on Tuesday July 25 2017, @04:45PM (#544222) Homepage

    So they stuck "AI" in a VR headset.

    I mean, yeah, it's a buzzword search-term grabber.

    But really? What's the point? What does it "learn"? (P.S. it doesn't, even the "deep learning" / "machine learning" stuff is still just snake-oil, especially on this scale).

    • (Score: 3, Interesting) by JNCF on Tuesday July 25 2017, @06:21PM (2 children)

      by JNCF (4317) on Tuesday July 25 2017, @06:21PM (#544269) Journal

      But really? What's the point? What does it "learn"?

      My first thought was predictive rendering. We experience lag even without augmented reality. Our brains try to adjust for this, sometimes with mixed results. [wikipedia.org] Some modern online games also try to predict how their worlds should be rendered by anticipating the actions of distant players (not just assuming a continuation of the last known input), rendering the world as if those actions happened, and then rolling back the changes if those predictions were wrong. See GGPO, for example. [wikipedia.org] Given that even a teeny bit of lag would be annoying in augmented reality, I could see wanting to predict eye and head movement based on previous experiences with a given player. If 98%* of the time I glance left and then right in a given time interval I then proceed to glance left again, it might be helpful to just assume I'm going to do that and roll back the changes if necessary. While predictive rendering introduces some glitches, it can be tuned to prevent more errors than it introduces.

      *Number pulled directly from ass.
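The predict-then-rollback idea described in the comment above can be sketched as a simple loop: snapshot state, advance it speculatively using a guessed input, and roll back when the confirmed input disagrees. This is a toy illustration in the spirit of GGPO-style rollback; the class, the majority-vote predictor, and all names are invented for the sketch, not any real API:

```python
# Toy sketch of input prediction with rollback.
# Predictor: guess the most frequent past input (illustrative only).

from collections import Counter

class RollbackPredictor:
    def __init__(self):
        self.history = []          # confirmed inputs seen so far
        self.saved_state = None    # snapshot to restore on misprediction

    def predict(self):
        """Guess the next input as the most frequent past input."""
        if not self.history:
            return None
        return Counter(self.history).most_common(1)[0][0]

    def speculate(self, state):
        """Snapshot state, then advance it using the predicted input."""
        self.saved_state = dict(state)
        guess = self.predict()
        if guess is not None:
            state["last_input"] = guess
        return guess

    def confirm(self, state, actual):
        """On receiving the real input: keep the speculative state if
        the guess was right, otherwise roll back and re-apply truth."""
        rolled_back = state.get("last_input") != actual
        if rolled_back:
            state.clear()
            state.update(self.saved_state)   # rollback to snapshot
            state["last_input"] = actual     # re-simulate with truth
        self.history.append(actual)
        return rolled_back

p = RollbackPredictor()
state = {"last_input": None}
for real in ["left", "right", "left", "left"]:
    p.speculate(state)
    p.confirm(state, real)
```

In this toy run the predictor misses twice while it has no pattern to go on, then hits once "left" dominates the history; a real system would tune the predictor so rollbacks (the visible glitches) stay rarer than the lag they hide.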

      • (Score: 0) by Anonymous Coward on Tuesday July 25 2017, @06:33PM (1 child)

        by Anonymous Coward on Tuesday July 25 2017, @06:33PM (#544272)

        > While predictive rendering introduces some glitches, it can be tuned to prevent more errors than it introduces.

        In talking to high end simulator suppliers (the multi million $$ simulators used in F1 and other professional racing), one consistent comment is, "Never give a false cue." I believe this was originally relating to motion cues (moving simulator base), but probably applies to visual and audio cues as well. Better to do nothing (leaving a hole for the brain to fill in?) than to explode (mentally) the simulation with something that is wrong.

  • (Score: 2) by requerdanos on Wednesday July 26 2017, @01:50AM (1 child)

    by requerdanos (5997) Subscriber Badge on Wednesday July 26 2017, @01:50AM (#544419) Journal

    one teraflop per second

    One Tera Floating Point Operations Per Second per second?

    I know that idiotic nonsense like "PIN Number" and "ATM Machine" redundancize the world already, but flops per second is not a welcome addition.

    • (Score: 0) by Anonymous Coward on Wednesday July 26 2017, @06:13AM

      by Anonymous Coward on Wednesday July 26 2017, @06:13AM (#544506)

      It's the Amazon AWS growth rate.
