
posted by martyb on Wednesday May 08 2019, @05:34PM   Printer-friendly
from the if-you-have-to-ask-how-much-it-costs... dept.

Google has assembled thousands of Tensor Processing Units (TPUs) into giant programmable supercomputers and made them available on Google Cloud

[...] To be precise, Google has used a "two-dimensional toroidal" mesh network to enable multiple racks of TPUs to be programmable as one colossal AI supercomputer. The company says more than 1,000 TPU chips can be connected by the network.
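
To make the wrap-around concrete, here is a minimal Python sketch of 2D-torus addressing (an illustration, not Google's interconnect code; the 32x32 grid is a hypothetical size, since Google says only that more than 1,000 chips can be connected):

    # Minimal sketch: each chip at (x, y) on a 2D torus has four
    # neighbours, with indices wrapping around at the grid edges.
    def torus_neighbors(x, y, width=32, height=32):  # hypothetical 32x32 grid
        return [
            ((x - 1) % width, y),   # west, wraps to the right edge
            ((x + 1) % width, y),   # east, wraps to the left edge
            (x, (y - 1) % height),  # north, wraps to the bottom edge
            (x, (y + 1) % height),  # south, wraps to the top edge
        ]

    # Even a "corner" chip has four neighbours, so traffic never hits an edge:
    print(torus_neighbors(0, 0))  # [(31, 0), (1, 0), (0, 31), (0, 1)]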

Google claims each TPU v3 pod can deliver more than 100 petaFLOPS of computing power, which puts them amongst the world's top five supercomputers in terms of raw mathematical operations per second. Google added the caveat, however, that the pods operate at a lower numerical precision, making them more appropriate for superfast speech recognition or image classification – workloads that do not need high levels of precision.
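
The reduced precision in question is the bfloat16 format used by TPUs: a float32 with its mantissa truncated from 23 bits to 7, keeping the full exponent range. A quick pure-Python way to see what that costs (an illustration of the number format, not TPU code) is to zero the low 16 bits of a float32 bit pattern:

    import struct

    def to_bfloat16(value):
        """Truncate a float to bfloat16 precision by zeroing the low
        16 bits of its float32 representation (rounding toward zero)."""
        bits = struct.unpack('<I', struct.pack('<f', value))[0]
        return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]

    print(to_bfloat16(3.14159265))  # 3.140625 - only ~2-3 decimal digits survive
    print(to_bfloat16(0.1))         # 0.099609375
    print(to_bfloat16(1000001.0))   # 999424.0 - large integers lose exactness

Roughly three significant decimal digits is generally plenty for neural-network weights and activations, which is why the trade-off suits speech and image workloads.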

Source: https://techerati.com/news-hub/scalable-ai-supercomputers-now-available-as-a-service-on-google-cloud/


Original Submission

Related Stories

Update: Google Used a New AI to Design Its Next AI Chip 11 comments


Update, 9 June 2021: Google reports this week in the journal Nature that its next-generation AI chip, the successor to TPU version 4, was designed in part using an AI that researchers described to IEEE Spectrum last year. They've made some improvements since Spectrum last spoke to them. The AI now needs fewer than six hours to generate chip floorplans that match or beat human-produced designs on power consumption, performance, and area. Expert humans typically need months of iteration to do this task.

Original blog post from 23 March 2020 follows:

There's been a lot of intense and well-funded work developing chips that are specially designed to perform AI algorithms faster and more efficiently. The trouble is that it takes years to design a chip, and the universe of machine learning algorithms moves a lot faster than that. Ideally you want a chip that's optimized to do today's AI, not the AI of two to five years ago. Google's solution: have an AI design the AI chip.

"We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other," they write in a paper describing the work that posted today to Arxiv.

"We have already seen that there are algorithms or neural network architectures that... don't perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn't exist," says Azalia Mirhoseini, a senior research scientist at Google. "If we reduce the design cycle, we can bridge the gap."

Journal References:
1.) Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, et al. A graph placement methodology for fast chip design, Nature (DOI: 10.1038/s41586-021-03544-w)
2.) Anna Goldie, Azalia Mirhoseini. Placement Optimization with Deep Reinforcement Learning, arXiv:2003.08445 (https://arxiv.org/abs/2003.08445)

Related: Google Reveals Homegrown "TPU" For Machine Learning
Google Pulls Back the Covers on Its First Machine Learning Chip
Hundred Petaflop Machine Learning Supercomputers Now Available on Google Cloud
Google Replaced Millions of Intel Xeons with its Own "Argos" Video Transcoding Units


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 1, Insightful) by Anonymous Coward on Wednesday May 08 2019, @06:16PM (#840924)

    Didn't they spend all morning going on about privacy?
    What's the use case for superfast speech and image recognition?
  • (Score: 2) by bob_super (1357) on Wednesday May 08 2019, @07:25PM (#840946) (7 children)

    > Google has used a "two-dimensional toroidal" mesh network

    Can someone explain to me a two-dimensional toroidal anything?
    Call marketing, tell them to review the bullshitometer readings, and upgrade the press release to Tetradimensional Toroidal, please.
    'Cause less than 3 ain't gonna cut it.

    • (Score: 3, Informative) by mhajicek (51) on Wednesday May 08 2019, @09:26PM (#841013) (1 child)

      It's a ring network.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 0) by Anonymous Coward on Wednesday May 08 2019, @11:37PM (#841078)

        It's a ring of ring networks, because a 2D toroidal mesh forms the surface of a 3D torus.

    • (Score: 2) by c0lo (156) on Wednesday May 08 2019, @11:57PM (#841086) (4 children)

      > Can someone explain to me a two-dimensional toroidal anything?

      Take a 2D grid with the 'travelling' rule of "whenever you reach the border along any one dimension, you wrap around that dimension". This makes your grid adopt a toroidal topology.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 2) by bob_super (1357) on Thursday May 09 2019, @12:12AM (#841094) (3 children)

        I know. But if it's 2D, then it's a Pac-Man topology.
        Toroids are 3D objects.

        • (Score: 2) by c0lo (156) on Thursday May 09 2019, @12:51AM (#841108) (1 child)

          Topology doesn't care as long as you preserve the navigational - in this case, connectivity/networking - properties.
          The RAM in your computer is 3D too, yet you are addressing it as 1D.

          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 2) by bob_super (1357) on Thursday May 09 2019, @12:56AM (#841111)

            Hey! Work with me here!
            I wanna see the faces of the CEOs when they have to deliver the presentation to the press and their schmancy multi-million-dollar supermachines run a Pac-Man topology.

        • (Score: 0) by Anonymous Coward on Thursday May 09 2019, @08:10AM (#841244)

          The surface of a torus is 2D.

  • (Score: 0) by Anonymous Coward on Wednesday May 08 2019, @10:18PM (#841051)

    So they're using Pentium chips to power Google Cloud now?
