posted by janrinok on Saturday March 18 2023, @12:44AM
from the nuka-flops dept.

Getting To Zettascale Without Needing Multiple Nuclear Power Plants:

There's no resting on your laurels in the HPC world, no time to sit back and bask in a hard-won accomplishment that was years in the making. The ticker tape has only now been swept up in the wake of last year's long-awaited celebration of finally reaching exascale, with the Frontier supercomputer housed at Oak Ridge National Laboratory breaking that barrier.

With that in the rear-view mirror, attention is turning to the next challenge: Zettascale computing, some 1,000 times faster than what Frontier is running. In the heady months after his heralded 2021 return to Intel as CEO, Pat Gelsinger made headlines by saying the giant chip maker was looking at 2027 to reach zettascale.

Lisa Su, the chief executive officer who has led the remarkable turnaround at Intel's chief rival AMD, took the stage at ISSCC 2023 to talk about zettascale computing, laying out a much more conservative – some would say reasonable – timeline.

Looking at supercomputer performance trends over the past two-plus decades and the ongoing innovation in computing – think advanced packaging technologies, CPUs and GPUs, chiplet architectures, and the pace of AI adoption, among others – Su calculated that the industry could reach zettascale within the next 10 years or so.

"We just recently passed a very significant milestone last year, which was the first exascale supercomputer," she said during her talk, noting that Frontier – built using HPE systems running on AMD chips – is "using a combination of CPUs and GPUs. Lots of technology in there. We were able to achieve an exascale of supercomputing, both from a performance standpoint and, more importantly, from an efficiency standpoint. Now we draw the line, assuming that [we can] keep that pace of innovation going. ... That's a challenge for all of us to think through. How might we achieve that?"

Supercomputing efficiency is doubling roughly every 2.2 years, but even at that pace the projection lands a zettascale system around 2035 running at about 2,140 gigaflops per watt and consuming roughly 500 megawatts (a typical nuclear power plant produces about 1 gigawatt).
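As a rough sanity check of the quoted figures (an editorial back-of-the-envelope sketch, not AMD's own model), the 500 megawatt number follows directly from the efficiency projection:

    # one zettaflop/s at the projected 2,140 gigaflops per watt
    zettaflops = 1e21
    gflops_per_watt = 2140e9
    power_megawatts = zettaflops / gflops_per_watt / 1e6
    print(round(power_megawatts))  # ~467 MW, i.e. roughly the 500 megawatts quoted above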

Previous:
Intel to Explore RISC-V Architecture for Zettascale Supercomputers
Intel CEO Pat Gelsinger Says Moore's Law is Back
Supercomputers with Non-Von Neumann Architectures Could Reach "Zettascale" and "Yottascale"


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 2) by krishnoid on Saturday March 18 2023, @04:13AM (1 child)

    by krishnoid (1156) on Saturday March 18 2023, @04:13AM (#1296808)

    At least we'd then have a big player backing, supporting, and marketing wider acceptance of nuclear power. Unless they decide to run the datacenters on coal-fired steam engines.

    • (Score: 3, Insightful) by takyon on Saturday March 18 2023, @07:20AM

      by takyon (881) on Saturday March 18 2023, @07:20AM (#1296832) Journal

      Zettascale is an arbitrary milestone like all the others. They're not going to want to use much more than 50 megawatts to reach it, and less would be preferred. The more efficient computers become, the more processing power you get.
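      For scale, a quick back-of-the-envelope calculation (illustrative only, using just the figures quoted in the summary) shows what a 50 megawatt budget would demand:

          # efficiency implied by hitting one zettaflop/s within a 50 MW envelope
          zettaflops = 1e21
          budget_watts = 50e6
          required_gflops_per_watt = zettaflops / budget_watts / 1e9
          print(required_gflops_per_watt)  # 20,000 GF/W, roughly 10x the 2,140 GF/W projected for ~2035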

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by Ken_g6 on Saturday March 18 2023, @06:32AM (4 children)

    by Ken_g6 (3706) on Saturday March 18 2023, @06:32AM (#1296827)

    ...how big a "chip" can be made out of chiplets? A square foot? More?
    ...whether they'll finally integrate FPGAs into chips? They're not fast, but they can be configured to act like whatever ASIC you need, and that can be much more efficient than a general-purpose processor.
    ...if they'll use superconductors [ieee.org]?

    • (Score: 3, Interesting) by takyon on Saturday March 18 2023, @07:32AM (2 children)

      by takyon (881) on Saturday March 18 2023, @07:32AM (#1296834) Journal

      3D packaging is the way forward. The flatlands of 2D planar chips will become a city of skyscrapers. All the memory needed will have to go into the chip to hit efficiency targets.

      Optical interconnect to speed up non-stacked chiplet communication.

      Everything and the kitchen sink can be thrown into consumer and server chips. FPGAs but mostly ASICs I think. ML accelerators should be coming to Intel 14th/15th gen and Zen 5 desktop (already in Phoenix mobile).

      The Wafer Scale Engine approach could be used by supercomputers if it's worth it, but with 3D stacking as well.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 3, Informative) by guest reader on Saturday March 18 2023, @08:05AM (1 child)

        by guest reader (26132) on Saturday March 18 2023, @08:05AM (#1296835)

        More information about FPGAs and accelerators in HPC can be found in the paper Myths and Legends of High-Performance Computing [arxiv.org], written by Satoshi Matsuoka, the head of Japan's largest supercomputing center, and colleagues.

        "Myth 3: Extreme Specialization as Seen in Smartphones Will Push Supercomputers Beyond Moore’s Law!"

        [...]In fact, the only successful “accelerator” in the recent history of HPC is a GPU.

        [...]The reason for the acceleration is primarily that the majority of the HPC workloads are memory bandwidth bound (Domke et al. 2021).

        [...]In fact, there are mainly three reasons why the plethora of customized accelerated hardware approach would fail. The first is the most important, in that acceleration via SoC integration of various SFU is largely to enable strong scaling at a compute node level, and will be subject to the limitations of the Amdahl’s law, i.e., reducing the time to solution, the potential speedup is bound by the ratio of accelerated and non-accelerable fractions of the algorithm, which quickly limits the speedup.

        "Myth 4: Everything Will Run on Some Accelerator!"

        By proper analysis of the workloads, we may find that CPUs may continue to play a dominant role, with accelerator being an important but less dominant sidekick.

        "Myth 5: Reconfigurable Hardware Will Give You 100X Speedup!"

        The question whether reconfigurable logic can replace or augment GPUs as accelerators is interesting. FPGAs will certainly have a harder time due to their high flexibility that comes at a cost. Units built from reconfigurable logic are 10–20x less energy and performance efficient in silicon area.
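        To make the Amdahl's law point from Myth 3 concrete, here is a minimal illustrative sketch (not from the paper): even a perfect accelerator covering 90% of a workload cannot deliver more than a 10x overall speedup.

            # Amdahl's law: overall speedup when a fraction f of the work is accelerated by factor s
            def amdahl_speedup(f, s):
                return 1.0 / ((1.0 - f) + f / s)

            print(amdahl_speedup(0.90, 1e9))  # ~10.0, the ceiling even for an effectively infinite accelerator
            print(amdahl_speedup(0.90, 50))   # ~8.5, a realistic 50x accelerator already approaches that ceiling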

        • (Score: 0) by Anonymous Coward on Monday March 20 2023, @07:52AM

          by Anonymous Coward on Monday March 20 2023, @07:52AM (#1297135)
          Of what use are all those teraflops if you don't also have a very high-bandwidth interconnect capable of feeding the system enough data to keep it busy? High-performance I/O is every bit as much a part of HPC as fast computation.
    • (Score: 2) by turgid on Saturday March 18 2023, @11:36AM

      by turgid (4318) Subscriber Badge on Saturday March 18 2023, @11:36AM (#1296853) Journal

      In the embedded world, the Programmable System on a Chip has been popular for quite a few years now. You get maybe two ARM cores plus an FPGA on the same chip, and the manufacturers all provide Linux ports for them, often along with things like FreeRTOS.
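      As a minimal sketch of how software on such a part typically reaches logic in the FPGA fabric (the register block and base address below are hypothetical, purely for illustration), the ARM side running Linux can map the fabric's memory-mapped registers from userspace:

          import mmap, os, struct

          # hypothetical AXI-Lite register block exposed by the FPGA fabric
          BASE_ADDR = 0xA0000000   # illustrative physical address, page-aligned
          SPAN = 0x1000            # one 4 KiB page

          fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)   # needs root
          regs = mmap.mmap(fd, SPAN, offset=BASE_ADDR)

          regs[0:4] = struct.pack("<I", 1)            # poke a "start" bit into a control register
          (status,) = struct.unpack("<I", regs[4:8])  # read back a status register
          print(hex(status))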
