
posted by takyon on Thursday July 30 2015, @09:04PM
from the practice-for-zettascale dept.

By executive order, President Obama is asking US scientists to build the world's next fastest computer:

Several government agencies, most notably the Department of Energy, have been deeply involved in the development of supercomputers over the last few decades, but they've typically worked separately. The new initiative will bring together scientists and government agencies such as the Department of Energy, Department of Defense and the National Science Foundation to create a common agenda for pushing the field forward.

The specifics are thin on the ground at the moment. The Department of Energy has already identified the major challenges preventing "exascale" computing today, according to a fact sheet released by the government, but the main goal of the initiative, for now, will be to get disparate agencies working together on common goals.

Some have been quick to point out the challenges of accomplishing this feat.

Chief among the obstacles, according to Parsons, is the need to make computer components much more power efficient. Even then, the electricity demands would be gargantuan. "I'd say they're targeting around 60 megawatts, I can't imagine they'll get below that," he commented. "That's at least £60m a year just on your electricity bill."
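As a rough check on those numbers: 60 MW running around the clock is 60 MW × 8,760 hours ≈ 526 GWh per year, which at an assumed industrial rate of about £0.11 per kWh comes to roughly £58m, consistent with the £60m figure quoted.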

What other problems do you foresee? Is there anything about today's technology that limits the speed of a supercomputer? What new technologies might make this possible?


Original Submission

 
  • (Score: 4, Informative) by MichaelDavidCrawford on Thursday July 30 2015, @09:47PM

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Thursday July 30 2015, @09:47PM (#216038) Homepage Journal

    I have been studying this for some time.

    The two most obvious ways to cut power consumption are to reduce cache misses and to encourage rotating-media spin-down.

    Suppose you have a process that hits the disk once per second. How important is it that the magnetic media actually be accessed? Some kinds of caching could enable the disk to sleep, and userspace refactoring might enable less-frequent I/O.
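    A minimal sketch of the batching idea in C (the file name, record format and 60-second window are invented for illustration): instead of touching the disk every second, accumulate records in memory and write them out in one burst, so the drive can spin down in between.

        /* Hypothetical example: buffer once-per-second records in memory and
         * flush every 60 seconds, giving the disk a chance to spin down. */
        #include <stdio.h>
        #include <time.h>

        #define BATCH_SECONDS 60
        #define RECORD_MAX    128

        static char batch[BATCH_SECONDS][RECORD_MAX];
        static int  batched = 0;

        static void flush_batch(FILE *log)
        {
            for (int i = 0; i < batched; i++)
                fputs(batch[i], log);
            fflush(log);                 /* one burst of I/O instead of 60 small writes */
            batched = 0;
        }

        static void record_sample(FILE *log, const char *line)
        {
            snprintf(batch[batched], RECORD_MAX, "%ld %s\n", (long)time(NULL), line);
            if (++batched == BATCH_SECONDS)
                flush_batch(log);
        }

        int main(void)
        {
            FILE *log = fopen("samples.log", "a");
            if (!log) return 1;
            for (int i = 0; i < 180; i++)        /* simulate three minutes of samples */
                record_sample(log, "sensor reading");
            flush_batch(log);                    /* write out any partial batch */
            fclose(log);
            return 0;
        }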

    Most ISAs have ways to defeat the cache. These are generally quite different for each ISA, either due to patents or to hardware design considerations. If you know ahead of time that you will write an entire cache line, you can instruct the cache controller not to read the line from L2 just before you write your first byte into it.
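    On x86 the usual way to express that is a non-temporal ("streaming") store, which sends full cache lines out through the write-combining buffers instead of pulling them into the cache first; other ISAs spell the same idea differently (PowerPC's dcbz, for example). A rough sketch with the SSE2 intrinsics, assuming a 16-byte-aligned destination:

        /* Fill a buffer with non-temporal stores so the written lines are not
         * read into (or left polluting) the cache.  x86/SSE2 specific. */
        #include <immintrin.h>
        #include <stdint.h>
        #include <stdlib.h>

        static void fill_streaming(void *dst, uint8_t value, size_t bytes)
        {
            __m128i v = _mm_set1_epi8((char)value);
            __m128i *p = (__m128i *)dst;              /* must be 16-byte aligned */

            for (size_t i = 0; i < bytes / sizeof(__m128i); i++)
                _mm_stream_si128(p + i, v);           /* store bypasses L1/L2 */

            _mm_sfence();                             /* drain the write-combining buffers */
        }

        int main(void)
        {
            size_t n = 1 << 20;
            void *buf = aligned_alloc(64, n);         /* cache-line aligned */
            if (!buf) return 1;
            fill_streaming(buf, 0, n);
            free(buf);
            return 0;
        }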

    Caches actually slow down some memory access patterns. Supercomputer and embedded coders know all about that, but desktop, server and mobile coders commonly don't. The most egregious case is a column-first doubly nested loop.
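    For anyone who hasn't seen it, the pattern looks like this in C: arrays are row-major, so the first loop walks memory with unit stride while the second jumps a whole row per access and takes a cache miss on nearly every element.

        /* Same arithmetic, very different memory behaviour. */
        #include <stddef.h>

        #define N 2048
        static double a[N][N];

        double sum_row_major(void)         /* inner loop has unit stride */
        {
            double s = 0.0;
            for (size_t i = 0; i < N; i++)
                for (size_t j = 0; j < N; j++)
                    s += a[i][j];
            return s;
        }

        double sum_column_first(void)      /* the column-first nested loop: stride of N doubles */
        {
            double s = 0.0;
            for (size_t j = 0; j < N; j++)
                for (size_t i = 0; i < N; i++)
                    s += a[i][j];
            return s;
        }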

    Kernel coders know about this, but their capacity for helping user code is limited. One way to make a huge difference would be for the madvise(2) system call's MADV_RANDOM and MADV_DONTNEED advice selectors to have finer granularity - they affect page table mappings and the page cache, but not the L1 cache.
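    For reference, today's coarse-grained hints look roughly like this (the data file is hypothetical); the advice steers readahead and the page cache but says nothing about how the data moves through the CPU caches:

        /* Map a file, tell the kernel access will be scattered, then drop the
         * pages from the page cache when finished. */
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>

        int main(void)
        {
            int fd = open("big.dat", O_RDONLY);        /* hypothetical data file */
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            fstat(fd, &st);

            char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            madvise(p, st.st_size, MADV_RANDOM);       /* scattered access: skip readahead */

            volatile char sink = 0;
            for (off_t i = 0; i < st.st_size; i += 4096)
                sink += p[i];                          /* touch one byte per page */

            madvise(p, st.st_size, MADV_DONTNEED);     /* evict the pages from the page cache */
            munmap(p, st.st_size);
            close(fd);
            return 0;
        }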

    Some of this can be done transparently by the compiler. gcc and llvm already have an option that is a step in the right direction, but it isn't really effective because you don't want to apply it to everything. Compiler intrinsics get you most of the way there, but even so you need source-code refactoring to get all the way.
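    As one example of the intrinsic route, gcc and clang both provide __builtin_prefetch(addr, rw, locality), where a locality of 0 asks that the data not be kept in the cache after it is used. A sketch of how it might be worked into a loop; the prefetch distance of 16 elements is a guess that would need tuning per machine:

        /* Prefetch a little ahead of the current position, hinting that the
         * data is read-only and should not linger in the cache. */
        #include <stddef.h>

        double dot(const double *x, const double *y, size_t n)
        {
            double s = 0.0;
            for (size_t i = 0; i < n; i++) {
                if (i + 16 < n) {
                    __builtin_prefetch(&x[i + 16], 0, 0);   /* read, minimal temporal locality */
                    __builtin_prefetch(&y[i + 16], 0, 0);
                }
                s += x[i] * y[i];
            }
            return s;
        }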

    --
    Yes I Have No Bananas. [gofundme.com]