
posted by martyb on Friday May 19 2017, @12:34AM   Printer-friendly
from the Are-you-thinking-what-I'm-thinking? dept.

Google's machine-learning-oriented chips have gotten an upgrade:

At Google I/O 2017, Google announced its next-generation machine learning chip, called the "Cloud TPU." The new TPU no longer does only inference--now it can also train neural networks.

[...] In last month's paper, Google hinted that a next-generation TPU could be significantly faster if certain modifications were made. The Cloud TPU seems to have received some of those improvements. It's now much faster, and it can also do floating-point computation, which means it's suitable for training neural networks, too.

According to Google, the chip can achieve 180 teraflops of floating-point performance, which is six times more than Nvidia's latest Tesla V100 accelerator for FP16 half-precision computation. Even when compared against Nvidia's "Tensor Core" performance, the Cloud TPU is still 50% faster.
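The quoted speedups are consistent with Nvidia's published V100 throughput figures (roughly 30 TFLOPS for plain FP16 and 120 TFLOPS for the Tensor Cores; those two numbers are assumptions pulled from Nvidia's specs, not from the article):

```python
# Back-of-the-envelope check of the claimed speedups.
cloud_tpu = 180.0    # TFLOPS, Google's Cloud TPU claim
v100_fp16 = 30.0     # TFLOPS, V100 FP16 (assumed from Nvidia specs)
v100_tensor = 120.0  # TFLOPS, V100 Tensor Core (assumed from Nvidia specs)

print(cloud_tpu / v100_fp16)    # 6.0 -> "six times more"
print(cloud_tpu / v100_tensor)  # 1.5 -> "50% faster"
```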

[...] Google will also donate access to 1,000 Cloud TPUs to top researchers under the TensorFlow Research Cloud program to see what people do with them.

Also at EETimes and Google.

Previously: Google Reveals Homegrown "TPU" For Machine Learning
Google Pulls Back the Covers on Its First Machine Learning Chip
Nvidia Compares Google's TPUs to the Tesla P40
NVIDIA's Volta Architecture Unveiled: GV100 and Tesla V100


Original Submission

  • (Score: 1, Informative) by Anonymous Coward on Friday May 19 2017, @11:40AM (1 child)


    TL;DR TPU = google's new funky spy processor

    Tensor processing units (or TPUs) are application-specific integrated circuits (ASICs) developed specifically for machine learning. Compared to graphics processing units, they are designed explicitly for a higher volume of reduced precision computation (e.g. as little as 8-bit precision) with higher IOPS per watt, and lack hardware for rasterisation/texture mapping. The chip has been specifically designed for Google's TensorFlow framework, however Google still uses CPUs and GPUs for other types of machine learning. Other AI accelerator designs are appearing from other vendors also and are aimed at embedded and robotics markets. -- https://en.wikipedia.org/wiki/Tensor_processing_unit
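    The "reduced precision" idea above can be sketched in a few lines: squeeze float32 values into 8-bit integers plus a scale and offset. (The affine scale/zero-point scheme below is a generic illustration, not Google's actual hardware format.)

    ```python
    # Generic affine 8-bit quantization sketch (illustrative, not the TPU's format).
    def quantize_int8(values):
        lo, hi = min(values), max(values)
        scale = (hi - lo) / 255 if hi != lo else 1.0
        # Each quantized value fits in an unsigned 8-bit integer (0..255).
        return [round((v - lo) / scale) for v in values], scale, lo

    def dequantize_int8(q, scale, lo):
        # Recover approximate float values; error is at most half a step.
        return [x * scale + lo for x in q]

    weights = [-0.8, -0.1, 0.0, 0.3, 0.9]
    q, scale, lo = quantize_int8(weights)
    approx = dequantize_int8(q, scale, lo)
    ```

    Each weight now costs 1 byte instead of 4, which is why an ASIC built around 8-bit math can push far more operations per watt than a general-purpose GPU.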

    (shit like this would be good in the summary BTW HTH LOL)

  • (Score: 0) by Anonymous Coward on Friday May 19 2017, @12:19PM


    > ... Compared to graphics processing units, they are designed explicitly for a higher volume of reduced precision computation (e.g. as little as 8-bit precision) ...

    When you say it like that, it almost sounds like ML could be viewed as an extension of Fuzzy Logic, or at least use Fuzzy as an analogy? (sorry, don't have any car analogies today)

    My take (back then) was that the Fuzzy proponents took a very low precision approach to control systems--but higher precision than the most simple controllers like a bang-bang thermostat. Instead of all that boring system identification and modeling of the plant in classical/analog control theory, Fuzzy promised quick 'n dirty stable controllers that worked "well enough" for some applications.
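    For reference, the "most simple controller" mentioned above, a bang-bang thermostat with hysteresis, is a few lines (setpoint and band values here are hypothetical, for illustration only):

    ```python
    # Bang-bang thermostat: heat fully on below the band, fully off above it.
    def bang_bang(temp, setpoint=20.0, hysteresis=1.0, heating=False):
        if temp < setpoint - hysteresis:
            return True    # too cold: switch heater on
        if temp > setpoint + hysteresis:
            return False   # too warm: switch heater off
        return heating     # inside the band: keep current state (avoids rapid cycling)
    ```

    Fuzzy controllers sit between this on/off extreme and a fully modeled classical controller: they blend a handful of coarse rules instead of switching hard at one threshold.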