posted by janrinok on Saturday March 18 2023, @12:44AM
from the nuka-flops dept.

Getting To Zettascale Without Needing Multiple Nuclear Power Plants:

There's no resting on your laurels in the HPC world, no time to sit back and bask in a hard-won accomplishment that was years in the making. The ticker tape has only now been swept up after last year's long-awaited celebration of finally reaching exascale, with the Frontier supercomputer housed at Oak Ridge National Laboratory breaking that barrier.

With that in the rear-view mirror, attention is turning to the next challenge: zettascale computing, some 1,000 times faster than Frontier. In the heady months after his heralded 2021 return to Intel as CEO, Pat Gelsinger made headlines by saying the giant chip maker was looking at 2027 to reach zettascale.

Lisa Su, the chief executive officer who has led the remarkable turnaround at Intel's chief rival AMD, took the stage at ISSCC 2023 to talk about zettascale computing, laying out a much more conservative – some would say reasonable – timeline.

Looking at supercomputer performance trends over the past two-plus decades and the ongoing innovation in computing – think advanced packaging technologies, CPUs and GPUs, chiplet architectures, and the pace of AI adoption, among others – Su calculated that the industry could reach zettascale within the next 10 years or so.

"We just recently passed a very significant milestone last year, which was the first exascale supercomputer," she said during her talk, noting that Frontier – built using HPE systems running on AMD chips – is "using a combination of CPUs and GPUs. Lots of technology in there. We were able to achieve an exascale of supercomputing, both from a performance standpoint and, more importantly, from an efficiency standpoint. Now we draw the line, assuming that [we can] keep that pace of innovation going. ... That's a challenge for all of us to think through. How might we achieve that?"

Supercomputing efficiency is doubling roughly every 2.2 years, but even at that pace a zettascale system arriving around 2035 would consume about 500 megawatts while delivering 2,140 gigaflops per watt (a nuclear power plant produces roughly 1 gigawatt).
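As a back-of-the-envelope check on those figures, here is a minimal sketch in Python (assuming Frontier's published Green500 efficiency of roughly 52 gigaflops per watt as the 2022 baseline; the other numbers are from Su's talk):

    import math

    ZETTAFLOPS = 1e21       # 10^21 floating-point operations per second
    baseline_eff = 52e9     # Frontier, 2022: ~52 gigaflops per watt (Green500)
    target_eff = 2140e9     # projected efficiency at zettascale, per Su's talk
    doubling_years = 2.2    # efficiency doubling period cited in the talk

    # How many doublings from today's efficiency to the target, and when?
    doublings = math.log2(target_eff / baseline_eff)
    print(f"arrival: ~{2022 + doublings * doubling_years:.0f}")   # ~2034

    # Power draw of a zettascale machine at that efficiency.
    megawatts = ZETTAFLOPS / target_eff / 1e6
    print(f"power: ~{megawatts:.0f} MW")   # ~467 MW, half a ~1 GW nuclear plant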

Previous:
Intel to Explore RISC-V Architecture for Zettascale Supercomputers
Intel CEO Pat Gelsinger Says Moore's Law is Back
Supercomputers with Non-Von Neumann Architectures Could Reach "Zettascale" and "Yottascale"


Original Submission

 
  • (Score: 3, Informative) by guest reader (26132) on Saturday March 18 2023, @08:05AM (#1296835)

    More information about FPGAs and accelerators in HPC can be found in the paper Myths and Legends of High-Performance Computing [arxiv.org], written by Satoshi Matsuoka, head of Japan's largest supercomputing center.

    "Myth 3: Extreme Specialization as Seen in Smartphones Will Push Supercomputers Beyond Moore’s Law!"

    [...]In fact, the only successful “accelerator” in the recent history of HPC is a GPU.

    [...]The reason for the acceleration is primarily that the majority of the HPC workloads are memory bandwidth bound (Domke et al. 2021).
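    To see why bandwidth, rather than peak compute, sets the ceiling for such workloads, here is a minimal roofline-style sketch (the peak and bandwidth figures are illustrative stand-ins for a current HPC GPU, not numbers from the paper):

        # Roofline model: attainable FLOP/s = min(peak, intensity * bandwidth).
        peak_flops = 50e12    # illustrative: ~50 TFLOP/s FP64 peak
        bandwidth = 2e12      # illustrative: ~2 TB/s memory bandwidth

        def attainable(intensity):   # intensity in flops per byte
            return min(peak_flops, intensity * bandwidth)

        # A STREAM-triad-like kernel does 2 flops per 24 bytes moved.
        triad = attainable(2 / 24)
        print(f"{triad / 1e12:.2f} TFLOP/s = {100 * triad / peak_flops:.1f}% of peak")
        # Anything below peak/bandwidth = 25 flops/byte is bandwidth bound here.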

    [...]In fact, there are mainly three reasons why the plethora of customized accelerated hardware approach would fail. The first is the most important, in that acceleration via SoC integration of various SFU is largely to enable strong scaling at a compute node level, and will be subject to the limitations of the Amdahl’s law, i.e., reducing the time to solution, the potential speedup is bound by the ratio of accelerated and non-accelerable fractions of the algorithm, which quickly limits the speedup.
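    The Amdahl's-law bound is easy to see numerically; a minimal sketch (the 90% accelerable fraction is a made-up illustrative value):

        # Amdahl's law: speedup = 1 / ((1 - f) + f / s), where f is the
        # fraction of work the accelerator covers and s is its local speedup.
        def amdahl(f, s):
            return 1.0 / ((1.0 - f) + f / s)

        # Even an arbitrarily fast unit covering 90% of the work caps at 10x.
        for s in (10, 100, 1_000_000):
            print(f"s = {s:>9}: overall speedup = {amdahl(0.9, s):.2f}x")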

    "Myth 4: Everything Will Run on Some Accelerator!"

    By proper analysis of the workloads, we may find that CPUs may continue to play a dominant role, with accelerators being an important but less dominant sidekick.

    "Myth 5: Reconfigurable Hardware Will Give You 100X Speedup!"

    The question whether reconfigurable logic can replace or amend GPUs as accelerators is interesting. FPGAs will certainly have a harder time due to their high flexibility that comes at a cost. Units built from reconfigurable logic are 10–20x less energy- and performance-efficient for the same silicon area.

  • (Score: 0) by Anonymous Coward on Monday March 20 2023, @07:52AM (#1297135)
    Of what use are all those teraflops if you don't also have a very high-bandwidth interconnect capable of feeding the system data fast enough to keep it busy? High-performance I/O is every bit as much a part of HPC as fast computation.
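    A minimal sketch of the scale involved (the bytes-per-flop balance ratio here is an illustrative assumption, not a measured figure):

        # Aggregate bandwidth needed to keep a machine fed at a given
        # bytes-per-flop balance; real workloads vary enormously.
        ZETTAFLOPS = 1e21
        bytes_per_flop = 0.001    # made-up example ratio

        feed_rate = ZETTAFLOPS * bytes_per_flop
        print(f"required: {feed_rate / 1e18:.0f} exabyte/s")   # 1 EB/s at 1 zettaflops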