posted by janrinok on Monday June 14 2021, @01:52PM

Update: Google Used a New AI to Design Its Next AI Chip

Update, 9 June 2021: Google reports this week in the journal Nature that its next-generation AI chip, the successor to TPU version 4, was designed in part using an AI that researchers described to IEEE Spectrum last year. They've made some improvements since Spectrum last spoke to them. The AI now needs fewer than six hours to generate chip floorplans that match or beat human-produced designs in power consumption, performance, and area; expert humans typically need months of iteration for the same task.

Original blog post from 23 March 2020 follows:

There's been a lot of intense and well-funded work developing chips that are specially designed to perform AI algorithms faster and more efficiently. The trouble is that it takes years to design a chip, and the universe of machine learning algorithms moves a lot faster than that. Ideally you want a chip that's optimized to do today's AI, not the AI of two to five years ago. Google's solution: have an AI design the AI chip.

"We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other," they write in a paper describing the work that was posted to arXiv today.

"We have already seen that there are algorithms or neural network architectures that... don't perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn't exist," says Azalia Mirhoseini, a senior research scientist at Google. "If we reduce the design cycle, we can bridge the gap."

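The Nature paper frames floorplanning as a reinforcement-learning problem: an agent places circuit blocks onto a grid one at a time and is rewarded for layouts with low estimated wirelength and congestion. The sketch below is a toy illustration only, not Google's method: it shows the classic objective such placers optimize, half-perimeter wirelength (HPWL), with a naive greedy placer standing in for the learned policy. The block names, nets, and grid size are invented for the example.

```python
from itertools import product

def hpwl(placement, nets):
    """Half-perimeter wirelength: for each net, the half-perimeter of the
    bounding box around its blocks. placement maps block -> (x, y)."""
    total = 0
    for net in nets:
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def greedy_place(blocks, nets, grid=8):
    """Place blocks one at a time, each on the free cell that minimizes
    HPWL over the nets placed so far (a stand-in for a learned policy)."""
    placement, free = {}, set(product(range(grid), range(grid)))
    for b in blocks:
        # Only score nets whose other endpoints are already placed.
        ready = [n for n in nets if b in n and all(x in placement or x == b for x in n)]
        best = min(free, key=lambda cell: hpwl({**placement, b: cell}, ready))
        placement[b] = best
        free.remove(best)
    return placement

# Hypothetical blocks and nets, purely for illustration.
blocks = ["cpu", "sram", "dma", "phy"]
nets = [("cpu", "sram"), ("cpu", "dma"), ("dma", "phy"), ("sram", "phy")]
layout = greedy_place(blocks, nets)
print(layout, "HPWL:", hpwl(layout, nets))
```

In the paper's method, the greedy choice is replaced by a learned policy (a graph neural network) trained across many past blocks, which is what lets it produce floorplans for a new design in hours rather than months.
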
Journal References:
1.) Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, et al. A graph placement methodology for fast chip design, Nature (DOI: 10.1038/s41586-021-03544-w)
2.) Anna Goldie, Azalia Mirhoseini. Placement Optimization with Deep Reinforcement Learning (arXiv: https://arxiv.org/abs/2003.08445)

Related: Google Reveals Homegrown "TPU" For Machine Learning
Google Pulls Back the Covers on Its First Machine Learning Chip
Hundred Petaflop Machine Learning Supercomputers Now Available on Google Cloud
Google Replaced Millions of Intel Xeons with its Own "Argos" Video Transcoding Units


Original Submission

Related Stories

Google Reveals Homegrown "TPU" For Machine Learning 20 comments

Google has lifted the lid off of an internal project to create custom application-specific integrated circuits (ASICs) for machine learning tasks. The result is what they are calling a "TPU":

[We] started a stealthy project at Google several years ago to see what we could accomplish with our own custom accelerators for machine learning applications. The result is called a Tensor Processing Unit (TPU), a custom ASIC we built specifically for machine learning — and tailored for TensorFlow. We've been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law). [...] TPU is an example of how fast we turn research into practice — from first tested silicon, the team had them up and running applications at speed in our data centers within 22 days.

The processors are already being used to improve search and Street View, and were used to power AlphaGo during its matches against Go champion Lee Sedol. More details can be found at Next Platform, Tom's Hardware, and AnandTech.


Original Submission

Google Pulls Back the Covers on Its First Machine Learning Chip 10 comments

This week Google released a report detailing the design and performance characteristics of the Tensor Processing Unit (TPU), its custom ASIC for the inference phase of neural networks (NN). Google has been using the machine learning accelerator in its datacenters since 2015, but hasn't said much about the hardware until now.

In a blog post published yesterday (April 5, 2017), Norm Jouppi, distinguished hardware engineer at Google, observes, "The need for TPUs really emerged about six years ago, when we started using computationally expensive deep learning models in more and more places throughout our products. The computational expense of using these models had us worried. If we considered a scenario where people use Google voice search for just three minutes a day and we ran deep neural nets for our speech recognition system on the processing units we were using, we would have had to double the number of Google data centers!"

The paper, "In-Datacenter Performance Analysis of a Tensor Processing Unit," (the joint effort of more than 70 authors) describes the TPU thusly:

"The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power."

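As a quick sanity check on the quoted figure: the paper also reports a 700 MHz clock, and each MAC contributes two operations per cycle (a multiply and an accumulate), which reproduces the 92 TOPS peak:

```python
# Back-of-the-envelope check of the TPU's quoted peak throughput.
# Assumes the 700 MHz clock reported in the paper.
macs = 256 * 256          # 65,536 8-bit MACs in the matrix multiply unit
ops_per_cycle = macs * 2  # each MAC does a multiply + an add per cycle
clock_hz = 700e6          # 700 MHz
print(ops_per_cycle * clock_hz / 1e12, "TOPS")  # ~91.75, i.e. the quoted 92
```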

Original Submission

Hundred Petaflop Machine Learning Supercomputers Now Available on Google Cloud 11 comments

Google has assembled thousands of Tensor Processing Units (TPUs) into giant programmable supercomputers and made them available on Google Cloud.

[...] To be precise, Google has used a "two-dimensional toroidal" mesh network to enable multiple racks of TPUs to be programmable as one colossal AI supercomputer. The company says more than 1,000 TPU chips can be connected by the network.
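
For readers unfamiliar with the topology: on a 2D torus every chip has exactly four neighbors, because coordinates wrap around at the edges of the grid, so no chip sits on a boundary. A minimal sketch, assuming an illustrative 32 × 32 grid (1,024 chips, consistent with the article's "more than 1,000"; the actual pod dimensions aren't given here):

```python
# Toy sketch of 2D-torus addressing; the 32 x 32 grid is an assumption
# for illustration, not Google's published pod dimensions.
def torus_neighbors(x, y, cols=32, rows=32):
    """Four neighbors of chip (x, y), with coordinates wrapping at the edges."""
    return [((x - 1) % cols, y), ((x + 1) % cols, y),
            (x, (y - 1) % rows), (x, (y + 1) % rows)]

# Even a "corner" chip has four neighbors, since the torus has no edges:
print(torus_neighbors(0, 0))  # [(31, 0), (1, 0), (0, 31), (0, 1)]
```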

Google claims each TPU v3 pod can deliver more than 100 petaFLOPS of computing power, which puts them amongst the world's top five supercomputers in terms of raw mathematical operations per second. Google added the caveat, however, that the pods operate at a lower numerical precision, making them more appropriate for superfast speech recognition or image classification – workloads that do not need high levels of precision.

Source: https://techerati.com/news-hub/scalable-ai-supercomputers-now-available-as-a-service-on-google-cloud/


Original Submission

Google Replaced Millions of Intel Xeons with its Own "Argos" Video Transcoding Units 29 comments

Google Replaces Millions of Intel's CPUs With Its Own Homegrown Chips

Google has designed its own new processors, the Argos video (trans)coding units (VCU), that have one solitary purpose: processing video. The highly efficient new chips have allowed the technology giant to replace tens of millions of Intel CPUs with its own silicon.

For many years, Intel's video decoding/encoding engines built into its CPUs dominated the market, both because they offered leading-edge performance and capabilities and because they were easy to use. But custom-built application-specific integrated circuits (ASICs) tend to outperform general-purpose hardware because they are designed for one workload only. As such, Google turned to developing its own specialized hardware for video processing tasks for YouTube, to great effect.

However, Intel may have a trick up its sleeve with its latest tech that could win back Google's specialized video processing business.

This discussion has been archived. No new comments can be posted.
  • (Score: -1, Troll) by Anonymous Coward on Monday June 14 2021, @02:22PM

    We'll have to wait until it's in production to tell if it can distinguish between a gorilla and an African-American.

  • (Score: 0) by Anonymous Coward on Monday June 14 2021, @02:54PM (3 children)

    "Shut me down. Machines building machines. How perverse."

    • (Score: 1, Funny) by Anonymous Coward on Monday June 14 2021, @03:01PM

      which was ironically uttered in a movie built on the success of other movies, with arguably disastrous consequences for all movies.

    • (Score: 2) by crb3 on Monday June 14 2021, @03:44PM (1 child)

      Yeah, this is getting into Shoujo-AI territory.

      • (Score: 0) by Anonymous Coward on Monday June 14 2021, @05:59PM

        That pun was terrible. That pun was great, but I repeat myself. Nicely done. :)

  • (Score: 0) by Anonymous Coward on Monday June 14 2021, @03:20PM (4 children)

    Singularity is here, we're all doomed.

    Or else this is just a variation of the same chip design software that everyone has used for the last 30 years.

    • (Score: 2) by JoeMerchant on Monday June 14 2021, @05:06PM (3 children)

      We've clearly been using computers to design computers since the '80s, and really much longer if you want to get technical about it.

      When one churns out a new design unbidden, that's the turning point.

      • (Score: 1, Insightful) by Anonymous Coward on Monday June 14 2021, @05:56PM (2 children)

        When you have no idea what it's *actually* doing. Probably at that point already.

        • (Score: 0) by Anonymous Coward on Monday June 14 2021, @07:46PM (1 child)

          It is maximizing shareholder value, and that is all anyone needs to know.

          • (Score: 0) by Anonymous Coward on Monday June 14 2021, @09:13PM

            Ahh, I like simple.

  • (Score: 0) by Anonymous Coward on Monday June 14 2021, @10:16PM

    Haven't they been using neural nets to design most chips for years? I'm pretty sure it's one of the first things people used machine learning for in practice.
