NVIDIA's Volta Architecture Unveiled: GV100 and Tesla V100
posted by martyb on Friday May 12, @10:02AM   Printer-friendly
from the render-farm++ dept.

NVIDIA has detailed the full GV100 GPU as well as the first product based on the GPU, the Tesla V100:

The Volta GV100 GPU uses the 12nm TSMC FFN process, has over 21 billion transistors, and is designed for deep learning applications. We're talking about an 815 mm² die here, which pushes the limits of TSMC's current capabilities. Nvidia said it's not possible to build a larger GPU on the current process technology. The GP100 was the largest GPU that Nvidia ever produced before the GV100. It took up a 610 mm² surface area and housed 15.3 billion transistors. The GV100 is more than 30% larger.
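
A quick back-of-the-envelope check of that "more than 30%" figure, using only the two die areas quoted above (plain Python):

    # Die areas quoted above, in mm^2
    gv100_area = 815
    gp100_area = 610
    increase = (gv100_area - gp100_area) / gp100_area
    print(f"GV100 is about {increase:.0%} larger than GP100")  # ~34%, i.e. "more than 30%"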

Volta's full GV100 GPU sports 84 SMs (each SM [streaming multiprocessor] features four texture units, 64 FP32 cores, 64 INT32 cores, and 32 FP64 cores), fed by 128KB of shared L1 cache per SM that can be configured to varying texture cache and shared memory ratios. The GP100 featured 60 SMs and a total of 3,840 CUDA cores. The Volta SMs also feature a new type of core that specializes in Tensor deep learning 4×4 matrix operations. The GV100 contains eight Tensor cores per SM, which deliver a total of 120 TFLOPS for training and inference operations. To save you some math, this brings the full GV100 GPU to an impressive 5,376 FP32 and INT32 cores, 2,688 FP64 cores, and 336 texture units.
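
For anyone who wants to redo the "save you some math" totals, they follow directly from the per-SM figures quoted above (a small Python sketch, nothing beyond multiplication):

    # Per-SM resources quoted above, scaled to the full 84-SM GV100
    sms = 84
    print("FP32 cores:   ", sms * 64)  # 5,376
    print("INT32 cores:  ", sms * 64)  # 5,376
    print("FP64 cores:   ", sms * 32)  # 2,688
    print("Texture units:", sms * 4)   # 336
    print("Tensor cores: ", sms * 8)   # 672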

[...] GV100 also features four HBM2 memory stacks, like GP100, with each stack controlled by a pair of memory controllers. Speaking of which, there are eight 512-bit memory controllers (giving this GPU a total memory bus width of 4,096 bits). Each memory controller is attached to 768KB of L2 cache, for a total of 6MB of L2 cache (vs. 4MB for Pascal).
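
The memory-side figures add up the same way (again plain Python, using only the numbers in the quoted paragraph):

    # Eight 512-bit controllers, 768KB of L2 each, feeding four HBM2 stacks
    controllers = 8
    print("Bus width:", controllers * 512, "bits")      # 4,096
    print("L2 cache: ", controllers * 768, "KB")        # 6,144 KB = 6 MB
    print("Controllers per stack:", controllers // 4)   # 2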

The Tesla V100 has 16 GB of HBM2 memory with 900 GB/s of memory bandwidth. NVLink interconnect bandwidth has been increased to 300 GB/s.
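
Dividing the quoted bandwidth by the bus width gives the per-pin data rate those numbers imply (an inference from the figures above, not a published spec):

    # 900 GB/s over a 4,096-bit (512-byte-wide) bus
    bus_bytes = 4096 / 8
    print(f"Implied data rate: ~{900 / bus_bytes:.2f} GT/s per pin")  # ~1.76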

Note the "120 TFLOPS" for machine learning operations. Microsoft is "doubling down" on AI, and NVIDIA's sales to data centers have tripled in a year. Sales of automotive-oriented GPUs (more machine learning) also increased.

IBM Unveils New AI Software, Will Support Nvidia Volta

Also at AnandTech and HPCWire.


Original Submission

Related Stories

Google's New TPUs are Now Much Faster -- will be Made Available to Researchers

Google's machine learning oriented chips have gotten an upgrade:

At Google I/O 2017, Google announced its next-generation machine learning chip, called the "Cloud TPU." The new TPU no longer does only inference--now it can also train neural networks.

[...] In last month's paper, Google hinted that a next-generation TPU could be significantly faster if certain modifications were made. The Cloud TPU seems to have received some of those improvements. It's now much faster, and it can also do floating-point computation, which means it's suitable for training neural networks, too.

According to Google, the chip can achieve 180 teraflops of floating-point performance, six times the FP16 half-precision throughput of Nvidia's latest Tesla V100 accelerator. Even when compared against Nvidia's "Tensor Core" performance, the Cloud TPU is still 50% faster.
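
The two comparisons in that paragraph are mutually consistent; a quick check using only the quoted 180 TFLOPS (Cloud TPU) and 120 TFLOPS (V100 Tensor Core) figures:

    tpu_tflops = 180
    v100_tensor_tflops = 120
    print(tpu_tflops / v100_tensor_tflops)  # 1.5x, i.e. "50% faster"
    # "Six times" the plain FP16 rate implies the V100 baseline used was:
    print(tpu_tflops / 6)                   # ~30 TFLOPS FP16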

[...] Google will also donate access to 1,000 Cloud TPUs to top researchers under the TensorFlow Research Cloud program to see what people do with them.

Also at EETimes and Google.

Previously: Google Reveals Homegrown "TPU" For Machine Learning
Google Pulls Back the Covers on Its First Machine Learning Chip
Nvidia Compares Google's TPUs to the Tesla P40
NVIDIA's Volta Architecture Unveiled: GV100 and Tesla V100


Original Submission

  • (Score: 0) by Anonymous Coward on Friday May 12, @12:00PM (#508569)

    4 of these things, in a nice box with 512 GB-1 TB of RAM. A perfect node to add to one's home network. Too bad I can't afford it.

    • (Score: 0) by Anonymous Coward on Friday May 12, @08:42PM (#508844)

      WTF are you doing that you need 4 of these monsters???

      • (Score: 2) by bob_super (1357) on Friday May 12, @09:24PM (#508859)

        Living in a cold cold cold place would be my first guess. With cheap power.

        But we'll be happy to get crushed by that setup when it gets added to SN's folding@home team [stanford.edu]...
