Los Alamos Scientists Attack Load Balancing Challenge

posted by janrinok on Monday May 07 2018, @09:51AM   Printer-friendly
from the not-yet-solved dept.

Submitted via IRC for SoyCow3941

Simulating complex systems on supercomputers requires that scientists get hundreds of thousands, even millions of processor cores working together in parallel. Managing cooperation on this scale is no simple task. One challenge is assigning the workload given to each processor core. Unfortunately, complexity isn't distributed evenly across space and time in real-world systems. For example, in biology, a cell nucleus has far more molecules crammed into a small space than the more dilute, watery cytoplasm that surrounds it. Simulating nuclei therefore requires far more computing power and time than modeling other parts. Such situations lead to a mismatch in which some cores are asked to pull more weight than others.
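As a rough, invented illustration of that mismatch (it is not taken from the article), the short Python sketch below fills a one-dimensional "cell" with molecules, crowds most of them into a narrow nucleus region, and then splits the domain into equal-width slabs, one per core. The slabs covering the nucleus end up holding far more molecules, and therefore far more work, than the dilute ones around them; all counts and sizes are made up.

    # Toy model, not from the article: molecules crowd into a narrow nucleus,
    # so equal-width spatial slabs receive very unequal numbers of molecules.
    import random

    random.seed(0)
    molecules = []
    for _ in range(9000):                                # 90% of molecules sit in the
        molecules.append(0.45 + 0.10 * random.random())  # nucleus, 10% of the domain
    for _ in range(1000):                                # the rest spread over [0, 1)
        molecules.append(random.random())

    num_cores = 10
    counts = [0] * num_cores
    for x in molecules:
        slab = min(int(x * num_cores), num_cores - 1)    # equal-width slab owning x
        counts[slab] += 1

    for rank, count in enumerate(counts):
        print(f"core {rank}: {count} molecules")

With these made-up numbers, the two slabs spanning the nucleus each hold several thousand molecules while the others hold on the order of a hundred.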

To solve these load imbalances, Christoph Junghans, a staff scientist at the Department of Energy's Los Alamos National Laboratory (LANL), and his colleagues are developing algorithms with many applications across high-performance computing (HPC).

"If you're doing any kind of parallel simulation, and you have a bit of imbalance, all the other cores have to wait for the slowest one," Junghans says, a problem that compounds as the computing system's size grows. "The bigger you go on scale, the more these tiny imbalances matter." On a system like LANL's Trinity supercomputer up to 999,999 cores could idle, waiting on a single one to complete a task.

To work around these imbalances, scientists must devise ways to break apart, or decompose, a problem's most complex components into smaller portions. Multiple processors can then tackle those subdomains.
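A minimal sketch of that idea is below. It is not LANL's actual algorithm: it simply walks the list of cells and assigns contiguous runs to ranks by accumulating each cell's estimated cost toward an equal-work target, so a rank covering the expensive nucleus region receives fewer cells than one covering cheap cytoplasm. The per-cell costs are invented for illustration; a production code would measure or model them.

    # Cost-aware 1D decomposition sketch (hypothetical costs, not LANL's code):
    # split cells into contiguous chunks of roughly equal total cost rather
    # than equal cell counts.
    def decompose(costs, num_ranks):
        target = sum(costs) / num_ranks        # ideal amount of work per rank
        assignment, rank, acc = [], 0, 0.0
        for cost in costs:
            assignment.append(rank)
            acc += cost
            if acc >= target and rank < num_ranks - 1:
                rank += 1                      # this rank is full; start the next
                acc = 0.0
        return assignment

    def work_per_rank(costs, assignment, num_ranks):
        work = [0] * num_ranks
        for cost, rank in zip(costs, assignment):
            work[rank] += cost
        return work

    # Hypothetical 1D domain: cheap cytoplasm cells around an expensive nucleus.
    costs = [1] * 40 + [10] * 20 + [1] * 40
    naive = [i * 4 // len(costs) for i in range(len(costs))]   # equal cell counts
    balanced = decompose(costs, num_ranks=4)

    print("equal-count split:", work_per_rank(costs, naive, 4))
    print("cost-aware split: ", work_per_rank(costs, balanced, 4))

With these made-up costs, splitting by equal cell counts leaves two ranks with 115 units of work and two with 25, while the cost-aware split gives every rank 70.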

The work could help researchers move toward efficiently using exascale computers, machines that can perform one billion billion calculations per second, or one exaflops. Such systems are not yet available, but the Department of Energy is developing them; they would include 100 times more cores than are found in most current supercomputers. Using a process known as co-design, teams of researchers are seeking ways to devise hardware and software together so that current supercomputers and future exascale systems carry out complex calculations as efficiently as possible. Fixing load imbalance is part and parcel of co-design.

Source:
https://www.hpcwire.com/off-the-wire/los-alamos-scientists-attack-load-balancing-challenge


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Touché) by bob_super on Monday May 07 2018, @05:38PM


    If the article wasn't written at too high a level, how would the internet D-Ks show off their superior knowledge by spouting solutions that nobody with access to a million-core supercomputer could possibly have thought of?
