posted by cmn32480 on Wednesday April 06 2016, @01:49AM
from the that's-a-lot-of-horsepower dept.

During a keynote at GTC 2016, Nvidia announced the Tesla P100, a 16nm FinFET Pascal graphics processing unit with 15.3 billion transistors, aimed at high-performance computing and cloud customers. The GPU includes 16 GB of High Bandwidth Memory 2 (HBM2) with 720 GB/s of memory bandwidth and a unified memory architecture. It also uses the proprietary NVLink interconnect, with 160 GB/s of bandwidth, rather than the slower PCI Express.

Nvidia claims the Tesla P100 will reach 5.3 teraflops of FP64 (double precision) performance, along with 10.6 teraflops of FP32 and 21.2 teraflops of FP16. Of the 3840 stream processors on the full GP100 die, 3584 are enabled on this version.
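Those peak numbers line up with the enabled core count if you count a fused multiply-add as two operations, take GP100's FP64 rate as half of FP32 and its packed FP16 rate as double, and assume a boost clock of roughly 1.48 GHz (the announcement does not state a clock). A quick back-of-the-envelope check:

    # Rough sanity check of the quoted peaks; the ~1.48 GHz boost clock is an
    # assumption, not a figure from the announcement.
    cores = 3584                         # enabled FP32 stream processors
    clock_ghz = 1.48                     # assumed boost clock
    fp32 = cores * 2 * clock_ghz / 1000  # FMA counts as 2 ops/cycle -> TFLOPS
    fp64 = fp32 / 2                      # FP64 runs at half the FP32 rate
    fp16 = fp32 * 2                      # packed FP16 runs at twice the FP32 rate
    print(round(fp32, 1), round(fp64, 1), round(fp16, 1))  # 10.6 5.3 21.2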

At the keynote, Nvidia also announced a 170 teraflops (FP16) "deep learning supercomputer" or "datacenter in a box" called DGX-1. It contains eight Tesla P100s and will cost $129,000. The first units will be going to research institutions such as the Massachusetts General Hospital.
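The 170 teraflops figure looks like the aggregate FP16 peak of the eight cards rather than a measured benchmark: 8 × 21.2 teraflops ≈ 170 teraflops.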


Original Submission

 
  • (Score: 1, Funny) by Anonymous Coward on Wednesday April 06 2016, @12:01PM (#328020)

    Hmm. PETA is already a flop. So we have at least one flopped PETA. It's a good start.
