
SoylentNews is people


Nvidia V100 GPUs and Google TPUv2 Chips Benchmarked; V100 GPUs Now on Google Cloud

Accepted submission by takyon at 2018-05-01 21:03:12
Hardware

RiseML Benchmarks Google TPUv2 against Nvidia V100 GPU [hpcwire.com]

The RiseML blog last week reported benchmarks suggesting that Google's custom TPUv2 chips and Nvidia V100 GPUs offer roughly comparable performance on select deep learning tasks, but that access to TPUv2 hardware on Google Cloud costs less than access to V100s on AWS. Google began providing public access to TPUv2 in February via its Cloud TPU offering, which bundles four TPUv2 chips.
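To illustrate the kind of cost comparison RiseML made, here is a minimal sketch of a per-work-unit cost calculation. The hourly rates and throughput figure are illustrative placeholders, not the actual benchmarked or published prices:

```python
# Hypothetical cloud-accelerator cost comparison. All numbers below are
# placeholders for illustration, NOT the values measured by RiseML.

def cost_per_million_images(hourly_rate_usd, images_per_second):
    """USD spent to push one million training images through the model."""
    seconds_needed = 1_000_000 / images_per_second
    return hourly_rate_usd * seconds_needed / 3600

# Assumed: both platforms process ~2500 images/s on ResNet-50, but the
# hourly price differs (rates here are made up for the example).
cloud_tpu = cost_per_million_images(hourly_rate_usd=6.50, images_per_second=2500)
aws_v100s = cost_per_million_images(hourly_rate_usd=12.00, images_per_second=2500)

print(f"Cloud TPU: ${cloud_tpu:.2f} per million images")
print(f"4x V100 on AWS: ${aws_v100s:.2f} per million images")
```

With roughly equal throughput, the cheaper hourly rate translates directly into a cheaper cost per processed image, which is the crux of RiseML's pricing argument.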

[...] Elmar Haußmann, cofounder and CTO of RiseML, wrote [riseml.com] in the company blog, "In terms of raw performance on ResNet-50, four TPUv2 chips (one Cloud TPU) and four V100 GPUs are equally fast (within 2% of each other) in our benchmarks. We will likely see further optimizations in software (e.g., TensorFlow or CUDA) that improve performance and change this."
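The "within 2%" figure is a relative throughput difference. A minimal sketch of how such a comparison is typically computed from per-step timings (the batch size and step times below are hypothetical, chosen only so the gap lands near 2%):

```python
# Illustrative throughput comparison in the style of the RiseML benchmark.
# Batch size and step times are placeholders, NOT the measured values.

def throughput(batch_size, step_time_s):
    """Images processed per second for one training step."""
    return batch_size / step_time_s

tpu_ips = throughput(batch_size=1024, step_time_s=0.400)  # 4 TPUv2 chips (assumed)
gpu_ips = throughput(batch_size=1024, step_time_s=0.408)  # 4 V100 GPUs (assumed)

# Relative difference, normalized by the faster platform:
rel_diff = abs(tpu_ips - gpu_ips) / max(tpu_ips, gpu_ips)
print(f"TPU: {tpu_ips:.0f} img/s, GPU: {gpu_ips:.0f} img/s, gap: {rel_diff:.1%}")
```

A gap this small is easily within run-to-run noise, which is why RiseML hedges that future software optimizations could shift the ranking either way.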

Google later announced that it would offer access to Nvidia Tesla V100 GPUs on its Google Cloud Platform [hpcwire.com]:

The cloud giant also announced general availability of Nvidia's previous-generation P100 parts, which had been in public beta on Google's platform since September 2017 [hpcwire.com].

[...] While Google was the first of the big three public cloud providers to embrace [Nvidia Tesla (Pascal)] P100s, it was the last to adopt V100s. Amazon Web Services has offered the Volta parts since October 2017. Microsoft Azure followed with a private preview in November 2017. And IBM brought PCIe variant V100s into its cloud in January of this year.


Original Submission