posted by janrinok on Saturday March 18 2023, @12:44AM   Printer-friendly
from the nuka-flops dept.

Getting To Zettascale Without Needing Multiple Nuclear Power Plants:

There's no resting on your laurels in the HPC world, no time to sit back and bask in a hard-won accomplishment that was years in the making. The ticker tape has only now been swept up in the wake of the long-awaited celebration last year of finally reaching the exascale computing level, with the Frontier supercomputer housed at Oak Ridge National Laboratory breaking that barrier.

With that in the rear-view mirror, attention is turning to the next challenge: Zettascale computing, some 1,000 times faster than what Frontier is running. In the heady months after his heralded 2021 return to Intel as CEO, Pat Gelsinger made headlines by saying the giant chip maker was looking at 2027 to reach zettascale.

Lisa Su, the chief executive officer who has led the remarkable turnaround at Intel's chief rival AMD, took the stage at ISSCC 2023 to talk about zettascale computing, laying out a much more conservative – some would say reasonable – timeline.

Looking at supercomputer performance trends over the past two-plus decades and the ongoing innovation in computing – advanced packaging technologies, CPUs and GPUs, chiplet architectures, the pace of AI adoption, among others – Su calculated that the industry could reach zettascale within the next 10 years or so.

"We just recently passed a very significant milestone last year, which was the first exascale supercomputer," she said during her talk, noting that Frontier – built using HPE systems running on AMD chips – is "using a combination of CPUs and GPUs. Lots of technology in there. We were able to achieve an exascale of supercomputing, both from a performance standpoint and, more importantly, from an efficiency standpoint. Now we draw the line, assuming that [we can] keep that pace of innovation going. ... That's a challenge for all of us to think through. How might we achieve that?"

Supercomputing efficiency is doubling every 2.2 years, but even at that pace the projection points to a zettascale system around 2035 running at 2,140 gigaflops per watt and consuming roughly 500 megawatts (a typical nuclear power plant produces about 1 gigawatt).
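As a back-of-the-envelope check, using only the figures quoted above, the implied power draw follows directly from dividing the target throughput by the projected efficiency:

```python
# Sanity check on the projection: a zettascale (10^21 flops) system
# at the projected ~2035 efficiency of 2,140 gigaflops per watt.
ZETTAFLOPS = 1e21       # flops delivered by a zettascale machine
EFFICIENCY = 2140e9     # projected flops per watt

power_watts = ZETTAFLOPS / EFFICIENCY
power_megawatts = power_watts / 1e6
print(f"Implied power draw: {power_megawatts:.0f} MW")
```

This works out to roughly 467 MW, consistent with the article's "about 500 megawatts" figure, i.e. around half the output of a typical 1-gigawatt nuclear plant.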

Previous:
Intel to Explore RISC-V Architecture for Zettascale Supercomputers
Intel CEO Pat Gelsinger Says Moore's Law is Back
Supercomputers with Non-Von Neumann Architectures Could Reach "Zettascale" and "Yottascale"


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by krishnoid on Saturday March 18 2023, @04:13AM (1 child)

    by krishnoid (1156) on Saturday March 18 2023, @04:13AM (#1296808)

    At least we'd then have a big player backing, supporting, and marketing wider acceptance of nuclear power. Unless they decide to run the datacenters on coal-fired steam engines.

  • (Score: 3, Insightful) by takyon on Saturday March 18 2023, @07:20AM

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Saturday March 18 2023, @07:20AM (#1296832) Journal

    Zettascale is an arbitrary milestone like all the others. They're not going to want to use much more than 50 megawatts to reach it – less would be preferred. The more efficient computers become, the more processing power you get.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]