Google's machine learning oriented chips have gotten an upgrade [tomshardware.com]:
At Google I/O 2017, Google announced its next-generation machine learning chip, called the "Cloud TPU." The new TPU no longer does only inference--now it can also train neural networks.
[...] In last month's paper [google.com], Google hinted that a next-generation TPU could be significantly faster if certain modifications were made. The Cloud TPU seems to have received some of those improvements. It's now both much faster and capable of floating-point computation, which means it's suitable for training neural networks, too.
According to Google, the chip can achieve 180 teraflops of floating-point performance, which is six times more than Nvidia's latest Tesla V100 [tomshardware.com] accelerator for FP16 half-precision computation. Even when compared against Nvidia's "Tensor Core" [tomshardware.com] performance, the Cloud TPU is still 50% faster.
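The quoted ratios can be checked with a quick back-of-the-envelope calculation. The V100 baseline figures below (30 TFLOPS FP16, 120 TFLOPS with Tensor Cores) are assumptions inferred from the article's stated multiples, not numbers given in the summary itself:

```python
# Sanity-check of the performance ratios quoted above.
# Assumed V100 figures: ~30 TFLOPS FP16 baseline, ~120 TFLOPS with
# Tensor Cores (inferred from the "six times" and "50% faster" claims).

cloud_tpu_tflops = 180.0
v100_fp16_tflops = 30.0          # assumed FP16 throughput without Tensor Cores
v100_tensor_core_tflops = 120.0  # assumed FP16 throughput with Tensor Cores

ratio_fp16 = cloud_tpu_tflops / v100_fp16_tflops           # "six times more"
ratio_tensor = cloud_tpu_tflops / v100_tensor_core_tflops  # "50% faster"

print(ratio_fp16)    # 6.0
print(ratio_tensor)  # 1.5
```

Under those assumed baselines, the 6x and 1.5x multiples in the article are self-consistent.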
[...] Google will also donate access to 1,000 Cloud TPUs to top researchers under the TensorFlow Research Cloud [tensorflow.org] program to see what people do with them.
Also at EETimes [eetimes.com] and Google [blog.google].
Previously: Google Reveals Homegrown "TPU" For Machine Learning [soylentnews.org]
Google Pulls Back the Covers on Its First Machine Learning Chip [soylentnews.org]
Nvidia Compares Google's TPUs to the Tesla P40 [soylentnews.org]