
SoylentNews is people

posted by martyb on Friday May 19 2017, @12:34AM   Printer-friendly
from the Are-you-thinking-what-I'm-thinking? dept.

Google's machine learning oriented chips have gotten an upgrade:

At Google I/O 2017, Google announced its next-generation machine learning chip, called the "Cloud TPU." The new TPU no longer does only inference--now it can also train neural networks.

[...] In last month's paper, Google hinted that a next-generation TPU could be significantly faster if certain modifications were made. The Cloud TPU seems to have received some of those improvements. It's now much faster, and it can also do floating-point computation, which means it's suitable for training neural networks, too.

According to Google, the chip can achieve 180 teraflops of floating-point performance, which is six times more than Nvidia's latest Tesla V100 accelerator for FP16 half-precision computation. Even when compared against Nvidia's "Tensor Core" performance, the Cloud TPU is still 50% faster.
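The two comparisons can be sanity-checked with a little arithmetic. The V100 figures below (30 TFLOPS FP16, 120 TFLOPS with Tensor Cores) are inferred from the ratios the article quotes, not stated in it:

```python
# Back-of-envelope check of the quoted speedups.
cloud_tpu_tflops = 180.0         # Google's stated Cloud TPU figure
v100_fp16_tflops = 30.0          # assumed V100 FP16 rate (implied by "six times")
v100_tensor_core_tflops = 120.0  # assumed V100 Tensor Core rate (implied by "50% faster")

print(cloud_tpu_tflops / v100_fp16_tflops)         # 6.0  -> "six times more"
print(cloud_tpu_tflops / v100_tensor_core_tflops)  # 1.5  -> "50% faster"
```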

[...] Google will also donate access to 1,000 Cloud TPUs to top researchers under the TensorFlow Research Cloud program to see what people do with them.

Also at EETimes and Google.

Previously: Google Reveals Homegrown "TPU" For Machine Learning
Google Pulls Back the Covers on Its First Machine Learning Chip
Nvidia Compares Google's TPUs to the Tesla P40
NVIDIA's Volta Architecture Unveiled: GV100 and Tesla V100


Original Submission

  • (Score: 2) by HiThere on Friday May 19 2017, @05:10PM (1 child)

    by HiThere (866) Subscriber Badge on Friday May 19 2017, @05:10PM (#512258) Journal

    It depends on which estimate you use. We don't even know that number to within an order of magnitude, particularly if you allow the exclusion of the parts of the brain dedicated to, e.g., handling blood chemistry, and especially if you include speculation that some quantum effects happen in thought.

    In fact, the entire basis of thought isn't really understood, so flops might be a poor measure for simulating it. Perhaps integer arithmetic is better, or fixed point. Flops only matter because of the selected algorithm, and I'm really dubious about that choice. That said, this doesn't imply that the current "deep learning" approach won't work. It's just that you can't assume its computational requirements will be equivalent. They could be either much higher or much lower.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by kaszz on Friday May 19 2017, @05:27PM

    by kaszz (4211) on Friday May 19 2017, @05:27PM (#512270) Journal

    Well, now that the capacity is becoming available, maybe it will enable research to find out?