posted by martyb on Thursday November 21 2019, @10:16AM   Printer-friendly
from the chips-tidbits dept.

Intel's Xe for HPC: Ponte Vecchio with Chiplets, EMIB, and Foveros on 7nm, Coming 2021

Today is Intel's pre-SC19 HPC Devcon event, and with Raja Koduri on stage, the company has given a small glimpse into its high-performance compute accelerator strategy for 2021. Intel disclosed that its new hardware has the codename 'Ponte Vecchio' and will be built on a 7nm process, as well as some other small interesting bits.

[...] For high-performance computing, the presentation highlighted three key areas that the Xe architecture will be targeting. First is a flexible data-parallel vector matrix engine, which plays into the hands of AI acceleration and AI training in a big way. The second is high double-precision (FP64) throughput, which has somewhat been disappearing of late due to reduced-precision AI workloads, but is still a strong requirement in traditional HPC workloads like weather, oil and gas, and astronomy. (We should point out that the diagram shows a 15x7 block of units, and Intel's Gen architecture uses 7 threads per execution unit.) The third tine in this trident is that Intel's HPC efforts will have high cache and memory bandwidth, which the slides suggest will be directly coupled to individual compute chiplets, ensuring a fast interconnect.
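To illustrate why FP64 matters for those traditional workloads, here is a quick Python sketch (not from the article) showing a running sum that a reduced-precision format silently loses; single precision is emulated by round-tripping each step through `struct`:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (IEEE-754 double) to single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Accumulate a tiny increment one million times, as a long simulation might.
increment = 1e-8
f64_total = 1.0
f32_total = 1.0
for _ in range(1_000_000):
    f64_total += increment                              # full FP64 add
    f32_total = to_f32(f32_total + to_f32(increment))   # rounded to FP32 each step

print(f"FP64 total: {f64_total:.8f}")  # ~1.01, as expected
print(f"FP32 total: {f32_total:.8f}")  # stuck at 1.0: 1e-8 is below FP32's resolution near 1.0
```

Every FP32 addition rounds back to 1.0, so a million updates vanish entirely, which is exactly the failure mode long-running weather or astronomy codes cannot tolerate.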

So in this case, enter Ponte Vecchio, named after the bridge that crosses the river Arno in Florence, Italy. This will be Intel's first 'exascale class' graphics solution, and it is clearly using both chiplet technology (based on 7nm) and Foveros die-stacking packaging methods. We further confirmed after our call, based on discussions we had with Intel earlier in the year, that Ponte Vecchio will also use Intel's Embedded Multi-Die Interconnect Bridge (EMIB) technology to join chiplets together. Pulling all the chips into a single package covers intra-package links; GPU-to-GPU communication, meanwhile, will occur through a Compute eXpress Link (CXL) interface layered on top of PCIe 5.0.
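For a rough sense of what that CXL-over-PCIe-5.0 link provides per GPU, some back-of-the-envelope arithmetic (the x16 lane count is an assumption; Intel gave no figures):

```python
# Approximate per-direction bandwidth of a PCIe 5.0 x16 link, the physical
# layer CXL rides on. Lane count is assumed, not confirmed by Intel.
gt_per_s_per_lane = 32      # PCIe 5.0 raw signaling rate, GT/s per lane
lanes = 16
encoding = 128 / 130        # PCIe 5.0 uses 128b/130b line coding

raw_gbit = gt_per_s_per_lane * lanes        # 512 Gbit/s raw
usable_gbyte = raw_gbit * encoding / 8      # usable GB/s per direction
print(f"~{usable_gbyte:.1f} GB/s per direction")  # ~63.0 GB/s
```

That ~63 GB/s per direction is modest next to on-package HBM-class bandwidth, which is presumably why the heavy traffic stays on EMIB inside the package.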

Intel's 2021 Exascale Vision in Aurora: Two Sapphire Rapids CPUs with Six Ponte Vecchio GPUs

As part of today's announcement, Intel has put some information on the table for a typical 'Aurora' [supercomputer] compute node. While not giving any specifics such as core counts or memory types, the company stated that a standard node will contain two next-generation CPUs and six next-generation GPUs, all connected via new connectivity standards.

Those CPUs will be Sapphire Rapids CPUs, Intel's second generation of 10nm server processors coming after the Ice Lake Xeons. The announcement today reaffirmed that Sapphire Rapids is a 2021 processor; and likely a late 2021 processor, as the company also confirmed that Ice Lake will have its volume ramp through late 2020. Judging from Intel's images, Sapphire Rapids is set to have eight memory channels per processor, with enough I/O to connect to three GPUs. Within an Aurora node, two of these Sapphire Rapids CPUs will be paired together, and support the next generation of Intel Optane DC Persistent Memory (2nd Gen Optane DCPMM). We already know from other sources that Sapphire Rapids is likely to be DDR5 as well, although I don't believe Intel has said that outright at this point.
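Under those assumptions (eight channels per socket, and DDR5-4800 as a purely speculative speed grade, since Intel has confirmed neither), the peak memory bandwidth of a two-socket Aurora node works out as follows:

```python
# Hypothetical peak memory bandwidth for an eight-channel Sapphire Rapids
# socket. DDR5-4800 is an assumed speed grade, not an Intel-confirmed figure.
channels = 8
transfers_per_s = 4.8e9     # DDR5-4800: 4800 MT/s
bytes_per_transfer = 8      # 64-bit data bus per channel

per_socket = channels * transfers_per_s * bytes_per_transfer / 1e9  # GB/s
per_node = 2 * per_socket   # an Aurora node pairs two CPUs
print(f"{per_socket:.1f} GB/s per socket, {per_node:.1f} GB/s per node")
# 307.2 GB/s per socket, 614.4 GB/s per node
```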

See also: Intel Xe GPU Architecture Detailed – Ponte Vecchio Xe HPC Exascale GPU With 1000s of EUs, Massive HBM Memory, Rambo Cache
AnandTech Exclusive: An Interview with Intel's Raja Koduri about Xe


Original Submission

Related Stories

Intel Not Focused on Defending High CPU Market Share 27 comments

Intel's CEO Bob Swan is looking beyond CPU market share:

"We think about having 30% share in a $230 billion [silicon] TAM[*] that we think is going to grow to a $300 billion [silicon] TAM over the next 4 years, and frankly, I'm trying to destroy the thinking about having 90% share inside our company, because I think it limits our thinking. I think we miss technology transitions, we miss opportunities, because we're in some ways preoccupied with protecting 90 instead of seeing a much bigger market with much more innovation going on, both inside our four walls and outside our four walls. So we come to work in the morning with a 30% share, with every expectation over the next several years that we will play a larger and larger role in our customers' success, and that doesn't just [mean] CPUs.

It means GPUs, it means AI, it does mean FPGAs, it means bringing these technologies together so we're solving customers' problems. So, we're looking at a company with roughly 30% share in a $288 billion silicon TAM, not CPU TAM but silicon TAM. We look at the investments we've been making over the last several years in these kinds of key technology inflections: 5G, AI, autonomous, acquisitions including Altera, that we think is more and more relevant both in the cloud but also at the network and at the edge. And we see a much bigger opportunity, and our expectations are that we're going to gain our fair share of that much larger TAM by investing in these key technology inflections." - Intel CEO Bob Swan

A 30% TAM in all of silicon would mean that Intel not only has more room to grow but is a lot more diversified as well. With the company working on the Nervana processor as well as its Xe GPU efforts, it seems poised to start clawing market share in new markets. Interestingly, it also means that Intel is not interested in defending its older title of being the CPU champion and will actually cede space to AMD where required. To me, this move is reminiscent of Lisa Su's decision to cede space in the GPU side of things to turn AMD around.

Intel's business strategy is now focused on whatever an "XPU" is as well as GPUs, FPGAs, machine learning accelerators, and next-generation memory/storage:

This means the company intends to continue making its heaviest bets in areas such as Optane storage, hardware Artificial Intelligence acceleration, 5G modems, data center networking, and more. The slide that really drives this commitment home comes from Q2's investor meeting that explicitly shows the company moving from a "protect and defend" strategy to a growth strategy. If this slide were in a sales meeting, it wouldn't say much—but delivered to the company's investors, it gains a bit of gravitas.

Most of this was revealed nearly six months ago at the company's May 2019 investor's meeting, but last week's Q3 investor's meeting continues and strengthens this story of Intel's future growth, with slides focused more on Optane, networking, and IoT/edge market growth than on the traditional PC and server markets.

[*] TAM = Total Addressable Market.

Related: Intel Promises "10nm" Chips by the End of 2019, and More
Intel's Interim CEO Robert Swan Becomes Full-Time CEO
AMD Gains Market Share in Desktops, Laptops, and Servers as of Q4 2018
PC Market Decline Blamed on Intel, AMD to See Gains
Intel Chip Shortages - at Least Another Quarter or Two to Go, Say PC Execs
Intel announces $20 billion increase in stock buybacks (from $4.5 billion)
Intel Xe High Performance Computing GPUs will use Chiplets


Original Submission

Intel Announces "oneAPI" for Programming CPUs, GPUs, FPGAs, Accelerators, etc. 13 comments

Write AI code once, run anywhere—it's not Java, it's Intel's oneAPI

Saturday afternoon (Nov. 16) at Supercomputing 2019, Intel launched a new programming model called oneAPI. Intel describes the necessity of tightly coupling middleware and frameworks directly to specific hardware as one of the largest pain points of AI/Machine Learning development. The oneAPI model is intended to abstract that tight coupling away, allowing developers to focus on their actual project and re-use the same code when the underlying hardware changes.

This sort of "write once, run anywhere" mantra is reminiscent of Sun's early pitches for the Java language. However, Bill Savage, general manager of compute performance for Intel, told Ars that's not an accurate characterization. Although each approach addresses the same basic problem—tight coupling to machine hardware making developers' lives more difficult and getting in the way of code re-use—the approaches are very different.

[...] When we questioned Savage about oneAPI's design and performance expectations, he distanced it firmly from Java, pointing out that there is no bytecode involved. Instead, oneAPI is a set of libraries that tie hardware-agnostic API calls directly to heavily optimized, low-level code that drives the actual hardware available in the local environment. So instead of "Java for Artificial Intelligence," the high-level takeaway is more along the lines of "OpenGL/DirectX for Artificial Intelligence."
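That library-dispatch model can be sketched in a few lines of Python; the function and backend names below are purely illustrative and are not oneAPI's actual interface:

```python
# Illustrative sketch of hardware-agnostic dispatch: one API call, multiple
# optimized backends behind it. All names here are hypothetical.

def _vector_add_cpu(a, b):
    # Stand-in for a heavily optimized CPU (e.g. AVX-512) implementation.
    return [x + y for x, y in zip(a, b)]

def _vector_add_gpu(a, b):
    # Stand-in for a GPU kernel launch; here it just reuses the CPU path.
    return _vector_add_cpu(a, b)

_BACKENDS = {"cpu": _vector_add_cpu, "gpu": _vector_add_gpu}

def vector_add(a, b, device="cpu"):
    """Hardware-agnostic entry point: caller code never changes per device."""
    return _BACKENDS[device](a, b)

print(vector_add([1, 2], [3, 4]))                 # [4, 6]
print(vector_add([1, 2], [3, 4], device="gpu"))   # [4, 6]
```

The point of the pattern is that there is no intermediate bytecode: the top-level call binds directly to whichever low-level implementation matches the local hardware, which is the OpenGL/DirectX analogy above.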

For even higher-performance coding inside tight loops, oneAPI also introduces a new language variant called "Data Parallel C++" allowing even very low-level optimized code to target multiple architectures. Data Parallel C++ leverages and extends SYCL, a "single source" abstraction layer for OpenCL programming.

In its current version, a oneAPI developer still needs to target the basic hardware type he or she is coding for—for example, CPUs, GPUs, or FPGAs. Beyond that basic targeting, oneAPI keeps the code optimized for any supported hardware variant. This would, for example, allow users of a oneAPI-developed project to run the same code on either Nvidia's Tesla V100 or Intel's own newly announced Ponte Vecchio GPU.

Related: Intel Xe High Performance Computing GPUs will use Chiplets


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 2) by c0lo on Thursday November 21 2019, @10:33AM (1 child)

    by c0lo (156) Subscriber Badge on Thursday November 21 2019, @10:33AM (#922926) Journal

Intel has good ground to recover from AMD in CPUs; meanwhile it tries its luck with GPUs.

    --
    https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 1, Offtopic) by aristarchus on Thursday November 21 2019, @11:20AM

    by aristarchus (2645) on Thursday November 21 2019, @11:20AM (#922943) Journal

    Xe? So, they couldn't call it "Academi", or just the original "Blackwater", as in Mercenary Scum of the swamps of Southern Someplace? I trust Intel less and less, but that is always the way it is with intel, the more reliable it is, the less useful it is, and the more useful, the less reliable. Backdoored with the Micro$erf, are we? Kinda like Eric Prince or Peter Thiel, backdoored, that is.

  • (Score: 2) by epitaxial on Thursday November 21 2019, @01:43PM

    by epitaxial (3165) on Thursday November 21 2019, @01:43PM (#922971)

    Has been doing this on their mainframe processors since the 1980s.
