
posted by cmn32480 on Monday July 20 2015, @07:33PM
from the does-it-run-windows? dept.

Currently, the world's most powerful supercomputers can ramp up to more than a thousand trillion operations per second, or a petaflop. But computing power is not growing as fast as it has in the past. On Monday, the June 2015 listing of the Top 500 most powerful supercomputers in the world revealed the beginnings of a plateau in performance growth.
...
The development rate began tapering off around 2008. Between 2010 and 2013, aggregate increases ranged between 26 percent and 66 percent. And on this June's list, there was a mere 17 percent increase from last November.
...
Despite the slowdown, many computational scientists expect performance to reach exascale, or more than a billion billion operations per second, by 2020.

Hmm, if they reach exascale computing, will the weatherman finally be able to predict whether it's going to rain this afternoon? Because he sucks at that now.


Original Submission

 
  • (Score: 4, Interesting) by VLM (445) on Monday July 20 2015, @08:11PM (#211552)

    The article seems to boil down to: performance would keep scaling if they were willing to pay enough money, which they aren't...

    There was the usual fluff at the end about weather modeling, but more practically speaking, "they" might be running out of applications for computation.

    Logic would dictate that eventually you run out of applications for a large enough number of cycles. Take car crash finite element modeling as an example: let's say you can't manufacture a car to better than a millimeter of precision, so you grid the car at mm scale, and unless you're expecting detonations there's little point in modeling deformations faster than the transit time of sound in steel across that millimeter. The slowest speed that can still cause a "serious" accident sets the total number of frames to calculate. And no matter how fast you generate results, they have to arrive slowly enough for humans to make sense of the output and decide "between test run #523 and #524, based on the results of #523, we'll optimize the stabilizer rib position in the hood hinge." Finally, there is a finite and fairly low limit on the total number of new car models mankind can release per year. So there exists a finite, although large, maximum number of FLOPs that can usefully be applied to the whole genre of car crash finite element analysis. It's possible that existing supercomputers, despite somewhat lower total throughput, have finally reached a market equilibrium at a fraction of a billion per year in capital cost and tens of millions per year in operating expense.

    I suppose with particle simulation for physics you likewise run into a theoretical, although very large, maximum number of FLOPs as long as you believe in Planck lengths and Heisenberg's uncertainty principle and stuff like that.

    Hmm, what else can you do with high-latency FLOPs that isn't a gag, or trivial, or already mentioned... At this moment I can't think of any use for unconstrained FLOPs, and we may very well have found the limit.

    A mechanical analogy: we rapidly packed more horsepower into locomotive engines, but eventually hit the practical ceiling of the technology. Technologically it's no great challenge for specially designed engines and track to handle 20,000 HP, look at the TGV and stuff like that, but in practice ye olde freight train has little use for more than 1500 or so HP. It's usually friction limited (so you have 8 engines on that coal train because there's snow and ice in the mountain pass, not because you need to floor all eight to get it moving down in the valley). Hmm. Cars? My entire life, commuter cars have never strayed much above 100 HP, despite phenomenal growth rates a century or so ago. Or internet companies in the 90s, doubling in size every couple of months until everyone who's getting internet has internet, and then...

  • (Score: 3, Informative) by mhajicek (51) on Monday July 20 2015, @09:16PM (#211588)

    Traditionally a lot of the money comes from nuclear warhead simulation. That too has a finite appetite for FLOPs.

    --
    The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
  • (Score: 4, Informative) by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday July 20 2015, @09:23PM (#211590) Journal

    http://www.ncsa.illinois.edu/ [illinois.edu]
    http://www.livescience.com/6392-9-super-cool-supercomputers.html [livescience.com]
    http://www.information-age.com/industry/hardware/123458374/5-real-life-applications-supercomputers-you-never-knew-about [information-age.com]
    http://www.extremetech.com/extreme/122159-what-can-you-do-with-a-supercomputer/2 [extremetech.com]

    Biotechnology such as protein folding is my favorite application.

    Here's another reason that supercomputing may be slowing down: applications need to adapt to new architectures. Manycore (Intel Xeon Phi) and GPU coprocessing require rewritten code.

    http://www.marketwatch.com/story/chinas-bevy-of-supercomputers-goes-unused-2014-07-15 [marketwatch.com]
    http://www.hpcwire.com/2014/07/17/dd/ [hpcwire.com]

    Lu conceded ground on one point, however – software development – acknowledging that “China is still behind in software, as high-efficiency software development depends on the overall scientific and technological level of the nation.”

    Another critique from MarketWatch’s Laura He goes even further, questioning not just China’s software prowess, but taking aim at the troublingly low utilization rates of its most expensive number-crunchers. The author cites a report from the NetEase Chinese news portal that claims less than 20 percent of China’s supercomputers have been used for scientific research.

    http://www.hpcwire.com/2012/12/12/programming_the_xeon_phi/ [hpcwire.com]

    Farber points out the Phi is essentially an x86 manycore SMP processor and supports the various parallel programming models — OpenMP and MPI, in particular. That means that most applications can get up and running with a recompilation, using Intel’s own developer toolset.

    But according to a previous analysis by Farber, the limited memory capacity on the devices will limit performance for typical OpenMP and MPI applications. According to him, to get performance out of the hardware, you need to make sure you are taking advantage of the coprocessor’s many cores and its muscular vector unit. “Massive vector parallelism is the path to realize that high performance,” writes Farber.

    Although there are 60 cores available on the Phi hardware Dr Dobb’s obtained (a pre-production part, apparently), four-way hyperthreading allows for up to 240 threads per chip. During testing it was determined that the application should have at least half of the available threads in use. It is tempting to think that non-vector codes could also benefit from the Xeon Phi, powered by thread parallelism alone, but Farber thinks that such applications will not be performance standouts on this platform.

    Since the Phi is a PCIe device with just a few gigabytes of memory, it’s also important to minimize data transfer back and forth between the CPU’s main memory and the local store on the coprocessor card. That means doing as little data shuffling as possible and making sure the coprocessor has enough contiguous work to do using local memory. In fact, Farber maintains that a lot of the design effort to boost performance on the Phi will revolve around minimizing data transfers.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]