
posted by martyb on Thursday November 08 2018, @02:35AM   Printer-friendly
from the accelerator++ dept.

AMD Announces Radeon Instinct MI60 & MI50 Accelerators: Powered By 7nm Vega

As part of this morning's Next Horizon event, AMD formally announced the first two accelerator cards based on the company's previously revealed 7nm Vega GPU. Dubbed the Radeon Instinct MI60 and Radeon Instinct MI50, the two cards are aimed squarely at the enterprise accelerator market, with AMD looking to significantly improve their performance competitiveness in everything from HPC to machine learning.

Both cards are based on AMD's 7nm GPU, which we've known about at a high level for some time but are only now getting more details on. The GPU is a refined version of AMD's existing Vega architecture, essentially adding the compute-focused features to the chip that the accelerator market requires. Interestingly, in terms of functional blocks, 7nm Vega is actually rather close to the existing 14nm "Vega 10" GPU: both feature 64 CUs and HBM2. The difference comes down to those extra accelerator features, and the die size itself.

With respect to accelerator features, 7nm Vega – and the resulting MI60 & MI50 cards – differentiates itself from the previous Vega 10-powered MI25 in a few key areas. 7nm Vega brings support for half-rate double precision – up from 1/16th rate on Vega 10 – and AMD is supporting new low-precision data types as well. These INT8 and INT4 instructions are especially useful for machine learning inferencing, where high precision isn't necessary: using the smallest INT4 data type, AMD can reach up to 4x the throughput of FP16/INT16. However, it's not clear from AMD's presentation how flexible these new data types are – and with which instructions they can be used – which will be important for understanding the full capabilities of the new GPU. All told, AMD is claiming peak throughput of 7.4 TFLOPS FP64, 14.7 TFLOPS FP32, and 118 TOPS INT4.
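Those headline numbers are internally consistent, for what it's worth. If 7nm Vega keeps Vega 10's organization of 64 lanes per CU, a peak clock of roughly 1.8 GHz (our inference from the FP32 figure; AMD has not published final clocks) reproduces all three claims once half-rate FP64 and 8x-rate INT4 are applied. A back-of-the-envelope sketch, with the clock and per-CU layout as assumptions:

    # Sanity check of AMD's claimed peak-throughput figures.
    # Assumptions (not from AMD's presentation): 64 lanes per CU as on
    # Vega 10, an FMA counted as 2 ops, and a ~1.8 GHz peak clock
    # inferred from the 14.7 TFLOPS FP32 claim.
    CUS = 64                # compute units on 7nm Vega
    LANES_PER_CU = 64       # stream processors per CU, as on Vega 10
    OPS_PER_FMA = 2         # one fused multiply-add = 2 ops
    CLOCK_GHZ = 1.8         # assumed peak clock

    fp32_tflops = CUS * LANES_PER_CU * OPS_PER_FMA * CLOCK_GHZ / 1000
    fp64_tflops = fp32_tflops / 2    # new half-rate double precision
    int4_tops = fp32_tflops * 8      # 4x the packed-FP16 rate via INT4

    print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~14.7
    print(f"FP64: {fp64_tflops:.1f} TFLOPS")  # ~7.4
    print(f"INT4: {int4_tops:.0f} TOPS")      # ~118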

Previously: AMD Returns to the Datacenter, Set to Launch "7nm" Radeon Instinct GPUs for Machine Learning in 2018

Related: AMD Previews Zen 2 Epyc CPUs with up to 64 Cores, New "Chiplet" Design


Original Submission

Related Stories

AMD Returns to the Datacenter, Set to Launch "7nm" Radeon Instinct GPUs for Machine Learning in 2018 6 comments

AMD 7nm Vega Radeon Instinct GPU AI Accelerators Enter Lab Testing

AMD's current generation Vega graphics architecture – which powers its Radeon RX Vega family of graphics cards – is based on a 14nm manufacturing process, but the chip company is already moving along with next-generation process technology. During the company's conference call with analysts following its Q1 2018 earnings report (which it knocked out of the park, by the way), AMD CEO Dr. Lisa Su made some comments regarding its upcoming 7nm GPUs.

"I'm also happy to report that our next-generation 7-nanometer Radeon Instinct product, optimized for machine learning workloads, is running in our labs," said Dr. Su. "We remain on track to provide samples to customers later this year."

If you recall, Radeon Instinct is AMD's product line of machine intelligence and deep learning accelerators. The current lineup features a mixture of Polaris- and Vega-based GPUs and could be considered a competitor to NVIDIA's Tesla family of products. [...] According to commentary from AMD at this year's CES, 7nm Vega products for mobile, along with the 7nm Radeon Instinct accelerators, will ship during the latter half of 2018.

More coverage at The Next Platform: "The Slow But Sure Return Of AMD In The Datacenter".

AMD Previews Zen 2 Epyc CPUs with up to 64 Cores, New "Chiplet" Design 9 comments

AMD has announced the next generation of its Epyc server processors, with up to 64 cores (128 threads) each. Instead of an 8-core "core complex" (CCX), AMD's 64-core chips will feature 8 "chiplets" with 8 cores each:

AMD on Tuesday formally announced its next-generation EPYC processor, code-named Rome. The new server CPU will feature up to 64 cores based on the Zen 2 microarchitecture, providing at least twice the performance per socket of existing EPYC chips.

As discussed in a separate story covering AMD's new 'chiplet' design approach, the AMD EPYC 'Rome' processor will carry multiple CPU chiplets manufactured using TSMC's 7 nm fabrication process, as well as an I/O die produced on a 14 nm node. As it appears, high-performance 'Rome' processors will use eight CPU chiplets, offering 64 x86 cores in total.

Why chiplets?

Separating CPU chiplets from the I/O die has its advantages because it enables AMD to make the CPU chiplets smaller, as physical interfaces (such as those for DRAM and Infinity Fabric) do not scale well with shrinks of process technology. Therefore, instead of making the CPU chiplets bigger and more expensive to manufacture, AMD decided to move the DRAM controllers and some other I/O into a separate chip. Besides lower costs, the added benefit AMD will enjoy with its 7 nm chiplets is the ability to more easily bin new chips for the needed clocks and power, something that is hard to predict in the case of servers.
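The cost argument is easy to illustrate with a toy yield model: under a simple Poisson defect model, the fraction of defect-free dies falls off exponentially with die area, so eight small chiplets fare far better than one large monolithic die. The defect density and die areas below are illustrative assumptions, not AMD or TSMC figures:

    # Toy illustration of chiplet yield economics using a Poisson
    # defect model: yield = exp(-defect_density * area).
    # All numbers are assumptions for the example, not real fab data.
    import math

    DEFECTS_PER_CM2 = 0.5   # assumed defect density for a young 7 nm node

    def poisson_yield(area_cm2):
        """Fraction of dies with zero defects under a Poisson model."""
        return math.exp(-DEFECTS_PER_CM2 * area_cm2)

    mono_area = 5.0         # hypothetical monolithic 64-core die, cm^2
    chiplet_area = 0.75     # hypothetical 8-core chiplet, cm^2

    print(f"monolithic die yield: {poisson_yield(mono_area):.1%}")    # ~8%
    print(f"single chiplet yield: {poisson_yield(chiplet_area):.1%}") # ~69%
    # Good chiplets are harvested independently, so the effective cost of
    # a 64-core package drops even before binning flexibility is counted.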

AMD also announced that Zen 4 is under development. It could be made on a "5nm" node, although that is speculation. The Zen 3 microarchitecture will be made on TSMC's N7+ process ("7nm" with more extensive use of extreme ultraviolet lithography).

AMD's Epyc CPUs will now be offered on Amazon Web Services.

AnandTech live blog of the Next Horizon event.

Previously: AMD Epyc 7000-Series Launched With Up to 32 Cores
TSMC Will Make AMD's "7nm" Epyc Server CPUs
Intel Announces 48-core Xeons Using Multiple Dies, Ahead of AMD Announcement

Related: Cray CS500 Supercomputers to Include AMD's Epyc as a Processor Option
Oracle Offers Servers with AMD's Epyc to its Cloud Customers


Original Submission

AMD and Nvidia's Latest GPUs Are Expensive and Unappealing 25 comments

AMD, Nvidia Have Launched the Least-Appealing GPU Upgrades in History

Yesterday, AMD launched the Radeon VII, the first consumer 7nm GPU. The card is intended to compete with Nvidia's RTX family of Turing-class GPUs, and it does, broadly matching the RTX 2080. It also matches the RTX 2080 on price, at $700. Because this card began life as a professional GPU intended for scientific computing and AI/ML workloads, it's unlikely that we'll see lower-end variants. That section of AMD's product stack will be filled by 7nm Navi, which arrives later this year.

Navi will be AMD's first new 7nm GPU architecture and will offer a chance to hit 'reset' on what has been, to date, the least compelling suite of GPU launches AMD and Nvidia have ever collectively kicked out the door. Nvidia has relentlessly moved its stack pricing higher while holding performance per dollar mostly constant. With the RTX 2060 and GTX 1070 Ti fairly evenly matched across a wide suite of games, the question of whether the RTX 2060 is better priced largely hinges on whether you stick to formal launch pricing for both cards or check historical data for actual price shifts.

Such comparisons are increasingly incidental, given that Pascal GPU prices are rising and cards are getting harder to find, but they aren't meaningless for people who either bought a Pascal GPU already or are willing to consider a used card. If you're an Nvidia fan already sitting on top of a high-end Pascal card, Turing doesn't offer you a great deal of performance improvement.

AMD has not covered itself in glory, either. The Radeon VII is, at least, unreservedly faster than the Vega 64. There's no equivalent last-generation GPU in AMD's stack to match it. But it also duplicates the Vega 64's overall power and noise profile, limiting the overall appeal, and it matches the RTX 2080's bad price. A 1.75x increase in price for a 1.32x increase in 4K performance isn't a great ratio even by the standards of ultra-high-end GPUs, where performance typically comes with a price penalty.
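Working that ratio through makes the point concrete: paying 1.75x the price for 1.32x the performance leaves roughly 25% less performance per dollar than the Vega 64 delivered, when a new card would normally improve that metric. A trivial sketch using the article's own figures:

    # Radeon VII vs. Vega 64, using the ratios quoted above.
    price_ratio = 1.75   # Radeon VII price / Vega 64 price
    perf_ratio = 1.32    # Radeon VII 4K performance / Vega 64

    print(f"relative perf per dollar: {perf_ratio / price_ratio:.2f}")  # ~0.75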

Rumors and leaks have suggested that Nvidia will release a Turing-based GPU called the GTX 1660 Ti (which has also been referred to as "1160"), with a lower price but missing the dedicated ray-tracing cores of the RTX 2000-series. AMD is expected to release "7nm" Navi GPUs sometime during 2019.

Radeon VII launch coverage also at AnandTech, Tom's Hardware.

Related: AMD Returns to the Datacenter, Set to Launch "7nm" Radeon Instinct GPUs for Machine Learning in 2018
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
AMD Announces "7nm" Vega GPUs for the Enterprise Market
Nvidia Announces RTX 2060 GPU
AMD Announces Radeon VII GPU, Teases Third-Generation Ryzen CPU
AMD Responds to Radeon VII Short Supply Rumors


Original Submission

  • (Score: 2) by tibman on Thursday November 08 2018, @04:53PM (2 children)

    by tibman (134) Subscriber Badge on Thursday November 08 2018, @04:53PM (#759419)

Wish my job required cutting-edge hardware so I could play with this stuff. Business application development blows. (okay, done venting, back to work!)

    Anyone here need tools like this? For machine learning, compute, password breaking, or whatever?

    --
    SN won't survive on lurkers alone. Write comments.
    • (Score: 2) by takyon on Thursday November 08 2018, @06:04PM (1 child)

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday November 08 2018, @06:04PM (#759457) Journal

      A GPU for machine learning? It's deepfakes [wikipedia.org] time!

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Thursday November 08 2018, @06:21PM

        by Anonymous Coward on Thursday November 08 2018, @06:21PM (#759462)

        Not unless they support CUDA.

        It is annoying, but OpenCL does not cut it.
