posted by Fnord666 on Wednesday August 30 2017, @12:46AM
from the toast-anyone? dept.

Intel has announced a Vision Processing Unit SoC named after the vision processing company it bought last year:

Intel today introduced the Movidius Myriad X Vision Processing Unit (VPU) which Intel is calling the first vision processing system-on-a-chip (SoC) with a dedicated neural compute engine to accelerate deep neural network inferencing at the network edge. You may recall Intel acquired Movidius roughly a year ago for its visualization processing expertise. Introduction of the SoC closely follows release of the Movidius Neural Compute Stick in July, a USB-based offering "to make deep learning application development on specialized hardware even more widely available."
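
For a sense of what development on this class of hardware looks like, here is a minimal sketch of running a single inference on the earlier Neural Compute Stick, assuming the Python bindings (mvnc) that shipped with Movidius's NCSDK; the graph file name and input shape are illustrative placeholders, not details from the announcement.

    # Minimal sketch: one inference on a Movidius Neural Compute Stick via the
    # NCSDK v1 Python bindings (mvnc). File name and input size are assumptions.
    import numpy
    from mvnc import mvncapi as mvnc

    devices = mvnc.EnumerateDevices()            # find attached sticks
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    with open('graph', 'rb') as f:               # pre-compiled network blob (placeholder path)
        graph_blob = f.read()
    graph = device.AllocateGraph(graph_blob)     # load the network onto the VPU

    img = numpy.random.rand(224, 224, 3).astype(numpy.float16)  # stand-in input tensor
    graph.LoadTensor(img, 'user object')         # queue the input
    output, userobj = graph.GetResult()          # blocking read of the result

    graph.DeallocateGraph()
    device.CloseDevice()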

Intel says the VPU's new neural compute engine is an on-chip hardware block specifically designed to run deep neural networks at high speed and low power. "With the introduction of the Neural Compute Engine, the Myriad X architecture is capable of 1 TOPS – trillion operations per second based on peak floating-point computational throughput of Neural Compute Engine – of compute performance on deep neural network inferences," says Intel.
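
As a rough sense of scale, that 1 TOPS peak bounds how many network inferences per second the engine could sustain. The sketch below assumes a model of about 1.1 billion multiply-accumulates per inference (roughly MobileNet-class); that figure is an illustrative assumption, not something Intel quoted, and real-world throughput would be lower once memory bandwidth and utilization are accounted for.

    # Back-of-the-envelope upper bound implied by the quoted 1 TOPS peak.
    # The model complexity below is an assumed example, not an Intel figure.
    peak_ops_per_s = 1e12                        # 1 TOPS peak (Neural Compute Engine)
    macs_per_inference = 1.1e9                   # assumed multiply-accumulates per inference
    ops_per_inference = 2 * macs_per_inference   # one MAC counts as two operations

    print(f"{peak_ops_per_s / ops_per_inference:.0f} inferences/s at peak")  # ~455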

Commenting on the introduction, Steve Conway of Hyperion Research said, "The Intel VPU is an essential part of the company's larger strategy for deep learning and other AI methodologies. HPC has moved to the forefront of R&D for AI, and visual processing complements Intel's HPC strategy. In the coming era of autonomous vehicles and networked traffic, along with millions of drones and IoT sensors, ultrafast visual processing will be indispensable."

Intel reports the new Movidius SoC VPU is capable of delivering more than 4 TOPS of total performance and that its tiny form factor and on-board processing are ideal for autonomous device solutions.

The device uses up to 1.5 W of power.
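
Taken with the 4 TOPS total figure above, Intel's own numbers imply a peak efficiency of roughly 2.7 TOPS per watt; a quick check:

    # Peak efficiency implied by the stated figures: 4 TOPS total at up to 1.5 W.
    total_peak_tops = 4.0
    max_power_w = 1.5
    print(f"{total_peak_tops / max_power_w:.2f} TOPS/W at peak")  # ~2.67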

Also at Tom's Hardware and AnandTech.


Original Submission

Related Stories

Huawei Kirin 970 SoC to Include Dedicated "Neural Processing Unit"

http://www.anandtech.com/show/11804/huawei-shows-unannounced-kirin-970-at-ifa-2017-dedicated-neural-processing-unit

The headline that Huawei seems to want to promote is the addition of dedicated neural network silicon inside the Kirin 970, dubbed the Neural Processing Unit (NPU). The sticker performance of the NPU is rated at 1.92 TFLOPs of FP16, which for reference, is about 3x what the Kirin 960's GPU alone can do on paper (~0.6 TFLOPs FP16). Or to put this in practical terms, Huawei says that the NPU is capable of discerning 2005 images per minute from internal testing, compared to 97 images per minute without the NPU – and presumably on the CPU – using the Kirin Thundersoft software (likely a future brand name). Obviously, depending on the implementation and power use, I would expect Huawei to try and leverage the NPU as much as possible in upcoming designs.

Other details for the Kirin 970 show improvements over the Kirin 960. First is the movement to TSMC's 10nm process, from 16FF+. The Kirin 960 launched a few months before the 10nm ramp up for other high-end smartphone SoCs hit the shelves, so Huawei is matching their competitors here. The core configuration is the same as the 960, with four ARM Cortex A73 cores and four ARM Cortex A53 cores, this time clocked at 2.4 GHz and 1.8 GHz respectively. The integrated graphics is the newest Mali G72, announced alongside the A75/A55 processors earlier this year, which will be in an MP12 configuration. Frequency was not listed.

[...] Huawei's final declarations on the NPU state that it is 25x the performance of a CPU with 50x the energy efficiency, and using a new HiAI (Hi-Silicon AI) nomenclature.
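
For what it's worth, the image-recognition figures quoted earlier imply a measured speedup somewhat below the 25x headline claim; the numbers are Huawei's, and the arithmetic below is only a quick check.

    # Speedup implied by Huawei's quoted benchmark, versus the 25x headline claim.
    images_per_min_npu = 2005
    images_per_min_cpu = 97
    print(f"measured speedup: {images_per_min_npu / images_per_min_cpu:.1f}x")  # ~20.7x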

I'm waiting for the smartphone that packs in a central processing unit, graphics processing unit, neural processing unit, and quantum processing unit.

Related: Snapdragon 820 SoC's Zeroth Neuromorphic Chip to Block Malware on Smartphones
Intel Announces Movidius Myriad X Vision Processing Unit


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 2) by Snotnose (1623) on Wednesday August 30 2017, @01:40AM (#561258) (3 children)

    I'm killing n00bs in MW2, how exactly does this improve my K/D ratio?

    --
    When the dust settled America realized it was saved by a porn star.
    • (Score: 3, Funny) by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday August 30 2017, @01:54AM (#561261) Journal

      Maybe you could make a really good aimbot with it.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Wednesday August 30 2017, @02:08AM (#561262)

      Whoa! Man! Are you a Crusader or a Warden?

    • (Score: 2, Interesting) by Anonymous Coward on Wednesday August 30 2017, @03:51AM (#561295)

      It doesn't help worth shit under any circumstances.

      These things are so proprietary you'd think Imagination Technologies was producing them, instead of Intel.

There is no opcode documentation. The firmware required to operate them is signed. The SDK required to work with them is wholly proprietary. And much like the Xeon Phi, availability of models you would actually use as more than a toy is on an OEM basis only.

  • (Score: -1, Flamebait) by Anonymous Coward on Wednesday August 30 2017, @04:08AM (#561306) (3 children)

    How many times can you repeat "deep" "neural" "learning" without actually spelling out how it's actually cooked up? Deep fucking bullshit.

    • (Score: -1, Flamebait) by Anonymous Coward on Wednesday August 30 2017, @04:09AM (#561308) (2 children)

      Deep-seated anger issues lol.

      • (Score: -1, Flamebait) by Anonymous Coward on Wednesday August 30 2017, @04:12AM (#561310) (1 child)

        Lol this, dipshit cunt.

        • (Score: -1, Flamebait) by Anonymous Coward on Wednesday August 30 2017, @05:08AM (#561320)

          No no, must be 'deep-shit cunt' to keep to the thread trend.

  • (Score: 0) by Anonymous Coward on Wednesday August 30 2017, @07:30PM (#561663)

I have worked a bit with their previous Myriad 2 architecture and it was quite nice and very energy efficient, actually. I'm wondering what this one will bring, but I guess that only time can tell.

  • (Score: 2) by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Wednesday August 30 2017, @09:41PM (#561760) Homepage

    Isn't this the third neural-network-based processor that's been announced this year?
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves