
posted by martyb on Friday May 12 2017, @10:02AM   Printer-friendly
from the render-farm++ dept.

NVIDIA has detailed the full GV100 GPU as well as the first product based on it, the Tesla V100:

The Volta GV100 GPU uses the 12nm TSMC FFN process, has over 21 billion transistors, and is designed for deep learning applications. We're talking about an 815mm2 die here, which pushes the limits of TSMC's current capabilities. Nvidia said it's not possible to build a larger GPU on the current process technology. The GP100 was the largest GPU that Nvidia ever produced before the GV100. It took up a 610mm2 surface area and housed 15.3 billion transistors. The GV100 is more than 30% larger.
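
To put that size jump in numbers: 815mm2 ÷ 610mm2 ≈ 1.34, so the GV100 carries about a third more die area than the GP100.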

Volta's full GV100 GPU sports 84 SMs (each SM [streaming multiprocessor] features four texture units, 64 FP32 cores, 64 INT32 cores, and 32 FP64 cores) fed by 128KB of L1 cache per SM that can be configured into varying ratios of texture/L1 cache and shared memory. The GP100 featured 60 SMs and a total of 3840 CUDA cores. The Volta SMs also feature a new type of core specialized for the 4x4 matrix operations that dominate deep learning workloads. The GV100 contains eight of these Tensor cores per SM, and together they deliver a total of 120 TFLOPS for training and inference operations. To save you some math, this brings the full GV100 GPU to an impressive 5,376 FP32 and INT32 cores, 2,688 FP64 cores, and 336 texture units.
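
For a sense of how those Tensor cores are actually used, here is a minimal sketch against the WMMA (warp matrix multiply-accumulate) API that CUDA 9 introduced alongside Volta. The nvcuda::wmma namespace and its functions are real CUDA; the kernel name and the fixed 16x16x16 tile (the FP16 fragment shape the API exposes, which the hardware decomposes into 4x4 Tensor core operations internally) are illustrative assumptions, not NVIDIA's example code.

    // Illustrative sketch: one warp computes C = A*B + C on a 16x16 tile
    // using Volta's Tensor cores. Launch with 32 threads (one full warp);
    // a and b point to 16x16 FP16 tiles, c to a 16x16 FP32 tile.
    #include <mma.h>
    using namespace nvcuda;

    __global__ void tensor_mma(const half *a, const half *b, float *c) {
        // Fragments: A and B in FP16, accumulator in FP32 (mixed precision).
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

        wmma::fill_fragment(c_frag, 0.0f);        // C = 0
        wmma::load_matrix_sync(a_frag, a, 16);    // load A tile (leading dimension 16)
        wmma::load_matrix_sync(b_frag, b, 16);    // load B tile
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // C = A*B + C on Tensor cores
        wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
    }

Each mma_sync executes the tile multiply-accumulate across the whole warp on the SM's Tensor cores; that warp-wide matrix operation is what the 120 TFLOPS figure is measuring.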

[...] GV100 also features four HBM2 memory stacks, like GP100, with each stack controlled by a pair of memory controllers. Speaking of which, there are eight 512-bit memory controllers (giving this GPU a total memory bus width of 4,096-bit). Each memory controller is attached to 768KB of L2 cache, for a total of 6MB of L2 cache (vs 4MB for Pascal).
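
The totals follow directly: 4 stacks × 2 controllers each = 8 controllers; 8 × 512-bit = a 4,096-bit bus; and 8 × 768KB = 6,144KB = 6MB of L2.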

The Tesla V100 has 16 GB of HBM2 memory with 900 GB/s of memory bandwidth. NVLink interconnect bandwidth has been increased to 300 GB/s.
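
That 900 GB/s is consistent with the bus width: a 4,096-bit interface moves 512 bytes per transfer, and at HBM2's roughly 1.75 Gb/s per pin that works out to about 512 B × 1.76 GT/s ≈ 900 GB/s.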

Note the "120 TFLOPS" for machine learning operations. Microsoft is "doubling down" on AI, and NVIDIA's sales to data centers have tripled in a year. Sales of automotive-oriented GPUs (more machine learning) also increased.

IBM Unveils New AI Software, Will Support Nvidia Volta

Also at AnandTech and HPCWire.


Original Submission

Related Stories

Google's New TPUs are Now Much Faster -- will be Made Available to Researchers 20 comments

Google's machine-learning-oriented chips have gotten an upgrade:

At Google I/O 2017, Google announced its next-generation machine learning chip, called the "Cloud TPU." The new TPU is no longer limited to inference--now it can also train neural networks.

[...] In last month's paper, Google hinted that a next-generation TPU could be significantly faster if certain modifications were made. The Cloud TPU seems to have received some of those improvements. It's now much faster, and it can also do floating-point computation, which means it's suitable for training neural networks, too.

According to Google, the chip can achieve 180 teraflops of floating-point performance, which is six times more than Nvidia's latest Tesla V100 accelerator for FP16 half-precision computation. Even when compared against Nvidia's "Tensor Core" performance, the Cloud TPU is still 50% faster.
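
The arithmetic behind those comparisons: 180 TFLOPS is 6x the V100's roughly 30 TFLOPS of general-purpose FP16 throughput, and 1.5x (i.e. 50% faster than) the 120 TFLOPS quoted above for the V100's Tensor cores.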

[...] Google will also donate access to 1,000 Cloud TPUs to top researchers under the TensorFlow Research Cloud program to see what people do with them.

Also at EETimes and Google.

Previously: Google Reveals Homegrown "TPU" For Machine Learning
Google Pulls Back the Covers on Its First Machine Learning Chip
Nvidia Compares Google's TPUs to the Tesla P40
NVIDIA's Volta Architecture Unveiled: GV100 and Tesla V100


Original Submission

Nvidia Announces Ampere GPU Architecture and A100 GPU 3 comments

NVIDIA Ampere Unleashed: NVIDIA Announces New GPU Architecture, A100 GPU, and Accelerator

Like the Volta reveal 3 years ago – and as is now traditional for NVIDIA GTC reveals – today's focus is on the very high end of the market. In 2017 NVIDIA launched the Volta-based GV100 GPU, and with it the V100 accelerator. V100 was a massive success for the company, greatly expanding their datacenter business on the back of the Volta architecture's novel tensor cores and the sheer brute force that can only be provided by an 800mm2+ GPU. Now in 2020, the company is looking to continue that growth with Volta's successor, the Ampere architecture.

[...] Designed to be the successor to the V100 accelerator, the A100 aims just as high, just as we'd expect from NVIDIA's new flagship accelerator for compute. The leading Ampere part is built on TSMC's 7nm process and incorporates a whopping 54 billion transistors, 2.5x as many as the V100 before it. NVIDIA has put the full density improvements offered by the 7nm process to use, and then some, as the resulting GPU die is 826mm2 in size, even larger than the GV100. NVIDIA went big on the last generation, and in order to top themselves they've gone even bigger this generation.
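
The numbers check out against the figures above: 54 billion transistors against the GV100's 21+ billion works out to the quoted 2.5x, while the die itself (826mm2 vs 815mm2) is only marginally larger; the gain comes almost entirely from the density of the 7nm process.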

We'll touch more on the individual specifications a bit later, but at a high level it's clear that NVIDIA has invested more in some areas than others. FP32 performance is, on paper, only modestly improved from the V100. Meanwhile tensor performance is greatly improved – almost 2.5x for FP16 tensors – and NVIDIA has greatly expanded the formats that can be used with INT8/4 support, as well as a new FP32-ish format called TF32. Memory bandwidth is also significantly expected[sic], with multiple stacks of HBM2 memory delivering a total of 1.6TB/second of bandwidth to feed the beast that is Ampere.
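
For context on the TF32 format mentioned above: it keeps FP32's 8-bit exponent (and therefore FP32's dynamic range) while truncating the mantissa to FP16's 10 bits, which is what lets the tensor cores run FP32-range math at close to FP16 tensor speed.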

See also: Nvidia's first Ampere GPU is designed for data centers and AI, not your PC
Nvidia unveils Ampere GPU architecture for AI boost, and the first target is coronavirus

Previously: NVIDIA's Volta Architecture Unveiled: GV100 and Tesla V100


Original Submission

  • (Score: 0) by Anonymous Coward on Friday May 12 2017, @12:00PM (2 children)

    by Anonymous Coward on Friday May 12 2017, @12:00PM (#508569)

4 of these things, in a nice box with 512GB-1TB RAM. A perfect node to add to one's home network. Too bad I can't afford it.

    • (Score: 0) by Anonymous Coward on Friday May 12 2017, @08:42PM (1 child)

      by Anonymous Coward on Friday May 12 2017, @08:42PM (#508844)

      WTF are you doing that you need 4 of these monsters???

      • (Score: 2) by bob_super on Friday May 12 2017, @09:24PM

        by bob_super (1357) on Friday May 12 2017, @09:24PM (#508859)

        Living in a cold cold cold place would be my first guess. With cheap power.

        But we'll be happy to get crushed by that setup when it gets added to SN's folding@home team [stanford.edu]...
