By executive order, President Obama is asking US scientists to build the next world's fastest computer:
Several government agencies, most notably the Department of Energy, have been deeply involved in the development of supercomputers over the last few decades, but they've typically worked separately. The new initiative will bring together scientists and government agencies such as the Department of Energy, Department of Defense and the National Science Foundation to create a common agenda for pushing the field forward.
The specifics are thin on the ground at the moment. The Department of Energy has already identified the major challenges standing in the way of "exascale" computing today, according to a fact sheet released by the government, but the main goal of the initiative, for now, will be to get disparate agencies working together on common goals.
Some have been quick to point out the challenges involved in accomplishing this feat.
Chief among the obstacles, according to Parsons, is the need to make computer components much more power efficient. Even then, the electricity demands would be gargantuan. "I'd say they're targeting around 60 megawatts, I can't imagine they'll get below that," he commented. "That's at least £60m a year just on your electricity bill."
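For a rough sense of where that £60m figure comes from, here is a back-of-the-envelope check in Python; the ~£0.11/kWh industrial electricity price is an assumption, real contracts vary:

    # Sanity check on the quoted annual electricity bill for a 60 MW machine.
    power_mw = 60
    hours_per_year = 24 * 365              # ~8760 hours
    price_per_kwh = 0.11                   # GBP, assumed industrial rate
    annual_kwh = power_mw * 1000 * hours_per_year
    annual_cost = annual_kwh * price_per_kwh
    print(f"{annual_kwh / 1e6:.0f} GWh/year, roughly GBP {annual_cost / 1e6:.0f}m/year")
    # -> 526 GWh/year, roughly GBP 58m/year, in line with the quoted figure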
What other problems do you foresee? Is there anything about today's technology that limits the speed of a supercomputer? What new technologies might make this possible?
(Score: 2) by mendax on Thursday July 30 2015, @10:43PM
It may be urban myth, but I heard the following as a university undergrad from an old programmer who programmed computers in the 1950s for the military: when the ENIAC was first switched on in 1946, it blacked out Philadelphia because no one thought to call the power company. 18,000 vacuum tubes draw a lot of power! Wikipedia says its power consumption was 150 kilowatts. Nothing like 60 megawatts, of course, but respectable for its day.
(Score: 4, Interesting) by takyon on Thursday July 30 2015, @10:59PM
I doubt any country will want to build a 60 MW exascale supercomputer, even if there are bragging rights to be had.
The target is 20-25 MW, which means efficiency needs to increase from 3-5 GFLOPS/W to 40-50 GFLOPS/W.
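The arithmetic behind those numbers is simple; a minimal sketch, assuming an exascale machine means a sustained 10^18 FLOPS:

    # Required efficiency for 1 exaFLOPS within a 20-25 MW power budget.
    exaflops = 1e18                        # FLOPS target
    for power_mw in (20, 25):
        watts = power_mw * 1e6
        gflops_per_watt = exaflops / watts / 1e9
        print(f"{power_mw} MW -> {gflops_per_watt:.0f} GFLOPS/W")
    # 20 MW -> 50 GFLOPS/W
    # 25 MW -> 40 GFLOPS/W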
Some processors appear to deliver a lot of GFLOPS/W [streamcomputing.eu], each new NVIDIA GPU architecture for example, but those figures don't translate as well into double-precision floating point performance.
The most efficient supercomputer on the Nov. 2014 Green500 list [green500.org] does 5.271 GFLOPS/W. Piz Daint does 3.185 GFLOPS/W at #9, but is also at #6 on the TOP500 list with 6.271 petaflops. The new Green500 list is late; I'll do a story when it is released.
(Score: 4, Informative) by bob_super on Thursday July 30 2015, @11:28PM
If you know exactly which algorithm you're going to use today, by far the best bang for your Watt is FPGA offload (Xilinx SDAccel, and whatever Intel cooks up out of Altera parts by 2020). But I don't think anyone has used FPGA nodes at that scale yet.