

Forget 1 Exaflops. How about 100 Exaflops in 2030?

Accepted submission by takyon at 2016-07-06 19:50:51
Hardware

Al Gara, chief architect for exascale computing at Intel, gave a talk at the ISC16 supercomputing conference in which he made the case for 100 exaflops supercomputing in the next 15 years [nextplatform.com]:

Dreaming is the foundation of the technology industry, and supercomputing has always been where the most REM action takes place among the best and brightest minds in computing, storage, and networking – as it should be. But to attain the 100 exaflops level of performance that is possible within the next fifteen years, the supercomputing sector is going to have to do a lot of hard engineering and borrow more than a few technologies from the hyperscalers who are also, in their own unique ways, pushing the scale envelope.

This, among other themes, was the focus of a recent talk by Al Gara, chief architect for exascale computing at Intel, at the ISC16 supercomputing conference in Germany. Gara, who was one of the architects of IBM's BlueGene family of massively parallel supercomputers before joining Intel, mapped out the scalability issues that the supercomputing industry faces as it tries to push performance ever higher. And the result of his detailed presentation was both optimistic in that he thought the industry, working collaboratively, could hit the 100 exaflops performance level by 2030 and somewhat pessimistic in that such a performance gain fifteen years from now – about 1,000X if you round generously – was nothing like the 50,000X we have seen in the past fifteen years.
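The scaling gap Gara describes can be put in annual terms. A quick back-of-the-envelope sketch (the 15-year horizon and the 1,000X and 50,000X factors are from the talk as reported; the per-year figures are just the implied compound growth rates):

```python
def annual_growth(total_factor, years=15):
    """Compound annual growth factor implied by a total speedup over `years`."""
    return total_factor ** (1 / years)

# Past 15 years: ~50,000X total, i.e. roughly doubling every year.
past = annual_growth(50_000)    # ~2.06x per year

# Next 15 years to 100 exaflops: ~1,000X total, a much gentler curve.
future = annual_growth(1_000)   # ~1.59x per year
```

In other words, even the optimistic 2030 target assumes performance growth slowing from about 2X per year to about 1.6X per year.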

Aspects discussed include bringing memory closer to processors using technologies like Hybrid Memory Cube [wikipedia.org] and High Bandwidth Memory [wikipedia.org], the need for ever-cheaper fiber optics to make interconnects faster, and the need for better transistors and more concurrency. One amusing prediction: power consumption of a 100 exaflops supercomputer will rise to as much as 80 megawatts instead of being capped at around 20 megawatts, but this could be mitigated if the price of a megawatt-year fell from around $1,000,000 to $500,000. That would keep the cost of the energy needed to run the machine to no more than half the total budget for the 100 exaflops system. I guess we need to bet on small-scale nuclear fusion [nextbigfuture.com] to help bring that cost down. Perhaps the use of ever-faster high-performance computing to run fusion efficiency simulations could create a feedback loop?
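The power-cost arithmetic works out as follows (the 80 MW figure and the megawatt-year prices are from the talk as reported; the yearly bills are simple multiplication):

```python
# Annual energy bill for an 80 MW machine at two megawatt-year prices.
power_mw = 80

cost_today = power_mw * 1_000_000   # ~$80M/year at ~$1,000,000 per MW-year
cost_halved = power_mw * 500_000    # ~$40M/year if the price falls to $500,000

annual_energy_cost = cost_halved
```

So halving the price of a megawatt-year halves the annual energy bill from roughly $80 million to roughly $40 million, which is what would keep power at no more than half the machine's total budget.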

