Getting To Zettascale Without Needing Multiple Nuclear Power Plants:
There's no resting on your laurels in the HPC world, no time to sit back and bask in a hard-won accomplishment that was years in the making. The ticker tape has barely been swept up from last year's long-awaited celebration of finally reaching the exascale computing level, with the Frontier supercomputer housed at Oak Ridge National Laboratory breaking that barrier.
With that in the rear-view mirror, attention is turning to the next challenge: Zettascale computing, some 1,000 times faster than what Frontier is running. In the heady months after his heralded 2021 return to Intel as CEO, Pat Gelsinger made headlines by saying the giant chip maker was looking at 2027 to reach zettascale.
Lisa Su, the CEO who has led the remarkable turnaround at Intel's chief rival AMD, took the stage at ISSCC 2023 to talk about zettascale computing, laying out a much more conservative – some would say reasonable – timeline.
Looking at supercomputer performance trends over the past two-plus decades and the ongoing innovation in computing – think advanced packaging technologies, CPUs and GPUs, chiplet architectures, and the pace of AI adoption, among other factors – Su calculated that the industry could reach zettascale within the next 10 years or so.
"We just recently passed a very significant milestone last year, which was the first exascale supercomputer," she said during her talk, noting that Frontier – built using HPE systems running on AMD chips – is "using a combination of CPUs and GPUs. Lots of technology in there. We were able to achieve an exascale of supercomputing, both from a performance standpoint and, more importantly, from an efficiency standpoint. Now we draw the line, assuming that [we can] keep that pace of innovation going. ... That's a challenge for all of us to think through. How might we achieve that?"
Supercomputing efficiency is doubling roughly every 2.2 years, but even at that pace a zettascale system lands around 2035 at about 2,140 gigaflops per watt – a power draw of roughly 500 megawatts, or about half the output of a typical nuclear power plant (~1 gigawatt).
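A quick back-of-the-envelope check of that projection (a sketch in Python; the 2,140 GF/W figure is Su's extrapolation, the rest is just unit conversion):

    # Back-of-the-envelope check of the 2035 zettascale projection.
    # The 2,140 GF/W efficiency figure is from Su's extrapolation.
    ZETTAFLOPS = 1e21            # 1 zettaflop/s, in flop/s
    GFLOPS_PER_WATT = 2140       # projected efficiency in ~2035

    watts = ZETTAFLOPS / (GFLOPS_PER_WATT * 1e9)
    print(f"Projected power draw: {watts / 1e6:.0f} MW")   # ~467 MW

    # A typical nuclear power plant produces ~1 GW, so this is
    # roughly half a reactor's output for a single machine.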
Previous:
Intel to Explore RISC-V Architecture for Zettascale Supercomputers
Intel CEO Pat Gelsinger Says Moore's Law is Back
Supercomputers with Non-Von Neumann Architectures Could Reach "Zettascale" and "Yottascale"
(Score: 2) by krishnoid on Saturday March 18, @04:13AM (1 child)
At least we'd then have a big player backing, supporting, and marketing wider acceptance of nuclear power. Unless they decide to run the datacenters on coal-fired steam engines.
(Score: 3, Insightful) by takyon on Saturday March 18, @07:20AM
Zettascale is an arbitrary milestone like all the others. They're not going to want to use much more than 50 megawatts to reach it – less would be preferred. The more efficient computers become, the more processing power you get.
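A rough sketch of what that 50 megawatt budget implies, assuming a 1 zettaflop/s target (the ~50 GF/W figure for Frontier today is an approximation from the Green500 list):

    # Efficiency required to hit 1 zettaflop/s inside a 50 MW budget.
    import math

    ZETTAFLOPS = 1e21        # flop/s
    BUDGET_WATTS = 50e6      # 50 megawatts

    required = ZETTAFLOPS / BUDGET_WATTS / 1e9
    print(f"Required efficiency: {required:,.0f} GF/W")    # 20,000 GF/W

    frontier = 50            # roughly what Frontier manages today
    doublings = math.log2(required / frontier)
    print(f"~{doublings:.1f} doublings, ~{doublings * 2.2:.0f} years "
          f"at one doubling every 2.2 years")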
(Score: 2) by Ken_g6 on Saturday March 18, @06:32AM (4 children)
...how big a "chip" can be made out of chiplets? A square foot? More?
...whether they'll finally integrate FPGAs into chips? They're not fast, but they can implement whatever custom logic you need, and that can be much more efficient than a general-purpose processor.
...if they'll use superconductors [ieee.org]?
(Score: 3, Interesting) by takyon on Saturday March 18, @07:32AM (2 children)
3D packaging is the way forward. The flatlands of 2D planar chips will become a city of skyscrapers. All the memory needed will have to go into the chip to hit efficiency targets.
Optical interconnect to speed up non-stacked chiplet communication.
Everything and the kitchen sink can be thrown into consumer and server chips – FPGAs, but mostly ASICs, I think. ML accelerators should be coming to Intel 14th/15th gen and Zen 5 desktop (they're already in Phoenix mobile).
The Wafer Scale Engine approach could be used by supercomputers if it's worth it, but with 3D stacking as well.
(Score: 3, Informative) by guest reader on Saturday March 18, @08:05AM (1 child)
More information about FPGAs and accelerators in HPC can be found in the paper Myths and Legends in High-Performance Computing [arxiv.org], written by Satoshi Matsuoka, the head of Japan's largest supercomputing center.
"Myth 3: Extreme Specialization as Seen in Smartphones Will Push Supercomputers Beyond Moore’s Law!"
"Myth 4: Everything Will Run on Some Accelerator!"
"Myth 5: Reconfigurable Hardware Will Give You 100X Speedup!"
(Score: 2) by turgid on Saturday March 18, @11:36AM
In the embedded world, the programmable system-on-chip has been popular for quite a few years now: you get maybe two ARM cores plus an FPGA fabric on the same chip. The manufacturers all provide Linux ports for them, plus often things like FreeRTOS.
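For the curious, a minimal sketch of what the ARM-to-FPGA handoff looks like from Linux userspace on such a part; the base address and register offsets here are hypothetical placeholders for whatever the FPGA design actually exposes in its address map:

    # Minimal sketch: poking a memory-mapped FPGA register block from
    # the ARM side of a PSoC-style part (e.g. a Zynq) under Linux.
    # BASE is a hypothetical placeholder; the real address comes from
    # the FPGA design's address map / device tree. Requires root.
    import mmap
    import os
    import struct

    BASE = 0x43C00000   # hypothetical AXI peripheral base (page-aligned)
    SPAN = 0x1000       # map one 4 KiB page of registers

    fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
    regs = mmap.mmap(fd, SPAN, offset=BASE)

    regs[0:4] = struct.pack("<I", 0x1)          # write a control register
    (status,) = struct.unpack("<I", regs[4:8])  # read back a status register
    print(f"status = {status:#010x}")

    regs.close()
    os.close(fd)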