
posted by Fnord666 on Monday December 19 2016, @11:24AM
from the not-your-father's-hpc-platform dept.

Arthur T Knackerbracket has found the following story:

At the AMD Tech Summit in Sonoma, Calif., last week (Dec. 7-9), CEO Lisa Su unveiled the company's vision to accelerate machine intelligence over the next five to ten years with an open and heterogeneous computing approach and a new suite of hardware and open-source software offerings.

The roots for this strategy can be traced back to the company's acquisition of graphics chipset manufacturer ATI in 2006 and the subsequent launch of the CPU-GPU hybrid Fusion generation of computer processors. In 2012, the Fusion platform matured into the Heterogeneous Systems Architecture (HSA), now owned and maintained by the HSA Foundation.

A decade after launching Fusion, AMD believes it has found the killer app for heterogeneous computing: machine intelligence, driven by exponential growth in data.

"We generate 2.5 quintillion bytes of data every single day – whether you're talking about Tweets, YouTube videos, Facebook, Instagram, Google searches or emails," said Su. "We have incredible amounts of data out there. And the thing about this data is it's all different – text, video, audio, monitoring data. With all this different data, you really are in a heterogeneous system and that means you need all types of computing to satisfy this demand. You need CPUs, you need GPUs, you need accelerators, you need ASICs, you need fast interconnect technology. The key to it is it's a heterogeneous computing architecture.

"Why are we so excited about this? We've actually been talking about heterogeneous computing for the last ten years," Su continued. "This is the reason we wanted to bring CPUs and GPUs together under one roof and we were doing this when people didn't understand why we were doing this and we were also learning about what the market was and where the market needed these applications, but it's absolutely clear that for the machine intelligence era, we need heterogeneous compute."
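As a back-of-envelope check on the "2.5 quintillion bytes per day" figure Su cites, it can be converted into a sustained global data rate; the daily figure is from the quote, and only standard unit conversions are added here:

```python
# Convert Su's daily data figure into a sustained rate.
BYTES_PER_DAY = 2.5e18          # 2.5 quintillion bytes (from the quote)
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 s

bytes_per_second = BYTES_PER_DAY / SECONDS_PER_DAY
terabytes_per_second = bytes_per_second / 1e12

print(f"{bytes_per_second:.3e} B/s")       # ~2.894e+13 B/s
print(f"{terabytes_per_second:.1f} TB/s")  # ~28.9 TB/s
```

In other words, the claimed volume works out to roughly 29 terabytes of new data every second, worldwide.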

Aiming to boost the performance, efficiency, and ease of implementation of deep learning workloads, AMD is introducing a brand-new hardware platform, Radeon Instinct, and new Radeon open source software solutions.

[...] "We are going to address key verticals that leverage a common infrastructure," said Raja Koduri, senior vice president and chief architect of Radeon Technologies Group. "The building block is our Radeon Instinct hardware platform, and above that we have the completely open source Radeon software platform. On top of that we're building optimized machine learning frameworks and libraries."

AMD is also investing in open interconnect technologies for heterogeneous accelerators; the company is a founding member of CCIX, Gen-Z and OpenCAPI.

[...] The AMD Tech Summit is a follow-on to the inaugural summit that debuted last December (2015). That first event was initiated by Raja Koduri as a team-building activity for the newly minted Radeon Technologies Group. The initial team of about 80, essentially hand-picked by Koduri to focus on graphics, met in Sonoma along with about 15 members of the press. The event was expanded this year to accommodate other AMD departments and nearly 100 media and analyst representatives.

-- submitted from IRC


Original Submission

Related Stories

AMD Announces Milan-X Epyc With 3D V-Cache, Bergamo, and First MCM GPU: Instinct MI200

AMD has announced its "Milan-X" Epyc CPUs, which reuse the same Zen 3 chiplets found in "Milan" Epyc CPUs with up to 64 cores, but with triple the L3 cache using stacked "3D V-Cache" technology designed in partnership with TSMC. This means that some Epyc CPUs will go from having 256 MiB of L3 cache to a whopping 768 MiB (804 MiB of cache when including L1 and L2 cache). 2-socket servers using Milan-X can have over 1.5 gigabytes of L3 cache. The huge amount of additional cache results in average performance gains in "targeted workloads" of around 50% according to AMD. Microsoft found an 80% improvement in some workloads (e.g. computational fluid dynamics) due to the increase in effective memory bandwidth.

AMD's next-generation of Instinct high-performance computing GPUs will use a multi-chip module (MCM) design, essentially chiplets for GPUs. The Instinct MI250X includes two "CDNA 2" dies for a total of 220 compute units, compared to 120 compute units for the previous MI100 monolithic GPU. Performance is roughly doubled (FP32 Vector/Matrix, FP16 Matrix, INT8 Matrix), quadrupled (FP64 Vector), or octupled (FP64 Matrix). VRAM has been quadrupled to 128 GB of High Bandwidth Memory. Power consumption of the world's first MCM GPU will be high, as it has a 560 Watt TDP.

The Frontier exascale supercomputer will use both Epyc CPUs and Instinct MI200 GPUs.

AMD officially confirmed that upcoming Zen 4 "Genoa" Epyc CPUs made on a TSMC "5nm" node will have up to 96 cores. AMD also announced "Bergamo", a 128-core "Zen 4c" Epyc variant, with the 'c' indicating "cloud-optimized". This is a denser, more power-efficient version of Zen 4 with a smaller cache. According to a recent leak, Zen 4c chiplets will have 16 cores instead of 8, will retain hyperthreading, and will be used in future Zen 5 Ryzen desktop CPUs as AMD's answer to Intel's Alder Lake heterogeneous ("big.LITTLE") x86 microarchitecture.

Also at Tom's Hardware (Milan-X).

Previously: AMD Reveals 'Instinct' for Machine Intelligence
AMD Launches "Milan" Epyc Server CPUs, with Zen 3 and up to 64 Cores
AMD at Computex 2021: 5000G APUs, 6000M Mobile GPUs, FidelityFX Super Resolution, and 3D Chiplets
AMD Unveils New Ryzen V-Cache Details at HotChips 33
AMD Aims to Increase Energy Efficiency of Epyc CPUs and Instinct AI Accelerators 30x by 2025


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Monday December 19 2016, @11:31AM

    by Anonymous Coward on Monday December 19 2016, @11:31AM (#443070)

    The very idea is protocol-ist!

    internet > world wide web > social media

    I never use social media, and I don't even like the web. HTTP is so boring. There are so many other, more exciting protocols on the internet.

    • (Score: 0) by Anonymous Coward on Monday December 19 2016, @04:02PM

      by Anonymous Coward on Monday December 19 2016, @04:02PM (#443174)

      They included email; that at least involves SMTP, which is much older than HTTP.

      The times when lots of data were generated over NNTP are long over; the same is true for Gopher and FTP.

    • (Score: 2) by LoRdTAW on Monday December 19 2016, @10:20PM

      by LoRdTAW (3755) on Monday December 19 2016, @10:20PM (#443396) Journal

      There are so many other more exciting protocols on the internet

      Like 9p.

  • (Score: 1, Touché) by Anonymous Coward on Monday December 19 2016, @12:20PM

    by Anonymous Coward on Monday December 19 2016, @12:20PM (#443084)

    "We generate 2.5 quintillion bytes of data every single day – whether you're talking about Tweets, YouTube videos, Facebook, Instagram, Google searches or emails," said Su.

    Sad!

    • (Score: 0) by Anonymous Coward on Monday December 19 2016, @12:22PM

      by Anonymous Coward on Monday December 19 2016, @12:22PM (#443085)

      How much could neural networks compress all that data down to manageable sizes, and maybe also help flag intellectual undesirables for our future 'Temporary Solution'? (Since the 'Final Solution' would be wiping out all of humanity.)

      • (Score: 0) by Anonymous Coward on Monday December 19 2016, @12:43PM

        by Anonymous Coward on Monday December 19 2016, @12:43PM (#443098)

        First they came for the jews, and I spoke out, because I was an anti-Semitic troll! Then they came for the trolls, and nobody spoke out, because everybody hates trolls.

  • (Score: 0) by Anonymous Coward on Monday December 19 2016, @12:30PM

    by Anonymous Coward on Monday December 19 2016, @12:30PM (#443090)

    "Instinct" is a marketing name

  • (Score: 0) by Anonymous Coward on Monday December 19 2016, @12:43PM

    by Anonymous Coward on Monday December 19 2016, @12:43PM (#443097)

    I noticed that in general people complain about word choice when they can't find anything wrong with the content, and they are envious.
    I think it's a very nice thing that the AMD people are doing, but I don't have the background to understand everything properly.
    I am excited about a future when my simulations will run more efficiently because computations will be distributed in a smarter way, although I don't know if I'll be smart enough to program for these complicated new machines.

    • (Score: 0) by Anonymous Coward on Monday December 19 2016, @12:46PM

      by Anonymous Coward on Monday December 19 2016, @12:46PM (#443100)

      Don't worry, dummy. After you get laid off from your codemonkey job, you can get a new job as a crash dummy for Tesla.

      • (Score: 0) by Anonymous Coward on Monday December 19 2016, @01:14PM

        by Anonymous Coward on Monday December 19 2016, @01:14PM (#443109)

        I don't have a codemonkey job.

        • (Score: 0) by Anonymous Coward on Tuesday December 20 2016, @03:32AM

          by Anonymous Coward on Tuesday December 20 2016, @03:32AM (#443525)

          AC #3 here.

          What non-codemonkey job do you have, if any? If none, how do you spend your waking time?

    • (Score: 2) by takyon on Monday December 19 2016, @01:10PM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday December 19 2016, @01:10PM (#443106) Journal

      AMD has to get through NVIDIA, which controls most of machine learning right now, as well as Intel, Google with its TPUs, and maybe Microsoft (FPGAs).

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by LoRdTAW on Monday December 19 2016, @10:31PM

        by LoRdTAW (3755) on Monday December 19 2016, @10:31PM (#443404) Journal

        Intel bought out Altera, the second-biggest FPGA maker, to leverage their FPGAs for this type of stuff. And interestingly enough, Altera is no more; it's all Intel-branded now.

    • (Score: 0) by Anonymous Coward on Monday December 19 2016, @02:00PM

      by Anonymous Coward on Monday December 19 2016, @02:00PM (#443121)

      You are too dumb to read through marketing gibberish. The long summary boils down to AMD announcing the marketing brand "Radeon Instinct," sprinkled with buzzwords ("machine learning", "heterogeneous", "platform"), devoid of any specifics.

  • (Score: 0) by Anonymous Coward on Monday December 19 2016, @08:31PM

    by Anonymous Coward on Monday December 19 2016, @08:31PM (#443310)

    Good job for recognizing that FOSS is the future (or present, actually).