China Wins Supercomputer Race

posted by azrael on Monday June 23 2014, @01:16PM   Printer-friendly
from the underwear-on-the-outside dept.

Wired reports that China continues to dominate the high end of the Top500 list of the world's most powerful supercomputers, even as the growth of the computing power on the list seems to be stagnating.

Tianhe-2, run by China's National University of Defense Technology, clocked 33.86 Pflop/s (quadrillions of calculations per second) for the 43rd edition of the Top500, released Monday at the International Supercomputing Conference in Leipzig, Germany. The runner-up in this twice-yearly ranking came in at only half the speed: The U.S. Energy Department's Titan, a Cray XK7 machine at Oak Ridge National Laboratory, tested out at 17.59 Pflop/s.
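
As a quick sanity check on those figures, here is a minimal Python sketch (values taken from the summary above; 1 Pflop/s is 10^15 calculations per second):

    # Quick check of the summary's figures (1 Pflop/s = 1e15 flop/s).
    tianhe2 = 33.86  # Pflop/s, Tianhe-2 (Top500 #1, June 2014)
    titan = 17.59    # Pflop/s, Titan (Top500 #2)

    print(f"Tianhe-2 / Titan = {tianhe2 / titan:.2f}x")  # ~1.93x, i.e. nearly double
    print(f"Tianhe-2 = {tianhe2 * 1e15:.3g} flop/s")     # ~3.39e16 calculations per second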

 
  • (Score: 5, Interesting) by Leebert (3511) on Monday June 23 2014, @02:03PM (#58993)

    "China Wins Supercomputer Race"? I wasn't aware that there was a finish line.

    A more interesting question, to me, is total installed research compute capacity. China might well have the "fastest" single installation, but I'm curious how many large HPCs they have in total. It's easy to build the fastest cluster if you dump all of your resources into it. If, however, China is building a similar number of clusters as the US and it's still taking spots in the top 10, that's much more interesting.
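
    A sketch of the aggregation being asked about here, assuming a CSV export of the Top500 list with hypothetical column names "country" and "rmax" (Rmax in Pflop/s); the real download format may differ:

        # Sum installed Top500 capacity and system count per country.
        # The file name and column names here are assumptions.
        import csv
        from collections import defaultdict

        systems = defaultdict(int)     # country -> number of listed systems
        capacity = defaultdict(float)  # country -> total Rmax in Pflop/s

        with open("top500.csv", newline="") as f:
            for row in csv.DictReader(f):
                systems[row["country"]] += 1
                capacity[row["country"]] += float(row["rmax"])

        # Print the ten countries with the most installed capacity.
        for country in sorted(capacity, key=capacity.get, reverse=True)[:10]:
            print(f"{country:15s} {systems[country]:3d} systems, "
                  f"{capacity[country]:8.2f} Pflop/s total")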

  • (Score: 0) by Anonymous Coward on Monday June 23 2014, @02:18PM (#59000)

    Lead dog in the pack for that "race" but they didn't catch the rabbit. Another "race" coming up soon, get your dog ready or get your bets in, or get out of the way!

    Oh for my own desert island with plenty of fish and forage so I can really "get out of the way".

  • (Score: 4, Insightful) by tibman (134) on Monday June 23 2014, @02:21PM (#59001)

    At this point it seems like the competition should be "Most useful work done with a supercomputer".

    --
    SN won't survive on lurkers alone. Write comments.
    • (Score: 4, Interesting) by Leebert (3511) on Monday June 23 2014, @02:35PM (#59010)

      Absolutely. Having worked in HPC for several years, I can say that's something that always bothered me. There was never a whole lot of "How much compute power do we need to do science X?" analysis done. It was "How much money can we get, and how big of a cluster can we build?" It was always funny to see the amount of energy that was expended to put the system into a configuration to run the Top500 tests, and then RE-configure the system for its ultimate production setup.

      Tremendous dickswinging waste of time, if you ask me. Although I can begrudgingly concede that it does have some positive effects, mostly in driving commitment to spend money. Like the space race. And I will begrudgingly concede that, no matter how much compute capacity they got, there was always something to use it for, so I can somewhat understand not bothering to do too much capacity requirements analysis.

      Still bugged me, though. :)

      • (Score: 3, Interesting) by Angry Jesus (182) on Monday June 23 2014, @04:33PM (#59065)

        > Tremendous dickswinging waste of time, if you ask me.

        I've been in the business for 20+ years, basically since I got out of college. What I've seen is that the Top500 is about politics and marketing. HPC is an industry that is very closely associated with international politics. A lot of that is due to stuff like nuclear weapon simulations and crypto; we've had a lot of export restrictions that are increasingly irrelevant due to cost improvements, non-domestic fabs, and architecture changes (it was a lot easier to restrict exports before clustering became the predominant design).

        But don't underestimate politics and marketing; ultimately everything has to pass through approval by non-technical people, and that's the language they speak. Think of it as the bureaucratic equivalent of cool cases and blinkenlights. [u-tokyo.ac.jp]

        • (Score: 2) by Leebert (3511) on Monday June 23 2014, @04:50PM (#59067)

          We might well actually know each other in meatspace. :)

          But yes, you more eloquently stated what I meant about having some degree of positive effect.

          • (Score: 2) by FatPhil (863) on Tuesday June 24 2014, @08:12AM (#59293)

            "Also, we like to think of supercomputing as a tool for improving economies, adding to knowledge and increasing prosperity." -- http://www.top500.org/blog/who-are-the-top500-list-countries/

            To me it seems they're putting the cart before the horse. It's a way for those who've got the muscle to wave it around, and for those who wish they had some muscle to pretend. But I don't see how it helps you build the muscle.

            I'm curious: how does the top-500 computers list compare to the top-500 skyscrapers list (if there is such a thing)?

            (And yes, I am aware that I may be hinting at the unproven skyscraper hypothesis. That was in part deliberate, but it does not mean I believe that hypothesis is true. And this disclaimer doesn't mean I believe the hypothesis is false, either.)
            --
            Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
        • (Score: 3, Interesting) by opinionated_science (4031) on Monday June 23 2014, @09:14PM (#59143)

          I use supercomputers for research (molecular biophysics). Can clusters do the job? Sure, some of them, but they still need to sit in an air-conditioned room to be reliable, and that is a significant part of the cost.

          The largest machines can run an entirely different class of problems, ones that scale smoothly from desktops up to the largest machines. In biology there are many canonical problems that could utilize every one of those machines, and the same is true of chemistry and of physics at every scale.

          The politics you mention is, of course, a factor, since many of the machines are not generally available and are for spook work. A significant fraction is actually commercial, and you never see the numbers (e.g. Google, Yahoo, Facebook, etc.).

          But as a researcher I can say we could have 10^6 times the current amount of computing, and it could still be filled to capacity. We could desperately do with an internode latency of 1us! (A rough sketch of why that matters follows below.)

          Perhaps a better question is: how do we make these machines *easier* to use? There is a considerable energy barrier to implementing a specific solution! The "exacloud" of Google is not really the solution, as it is not tightly coupled: useful, but not very efficient for many problems.
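
          A minimal sketch of the latency point above, with purely illustrative numbers (the 10us of compute per step is an assumption, not a measurement):

              # Fraction of each timestep spent computing rather than waiting
              # on internode synchronization, for a tightly coupled job.
              def step_efficiency(compute_us, latency_us):
                  return compute_us / (compute_us + latency_us)

              for latency_us in (1.0, 5.0, 10.0):  # internode latency in microseconds
                  eff = step_efficiency(10.0, latency_us)  # assume 10 us of compute per step
                  print(f"latency {latency_us:4.1f} us -> {eff:.0%} of time computing")

          With 10us of compute per synchronization, a 1us latency leaves about 91% of time doing useful work; at 10us latency that drops to 50%, which is why shaving latency matters as much as adding flops for tightly coupled problems.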

      • (Score: 2, Interesting) by SrLnclt (1473) on Monday June 23 2014, @09:12PM (#59141)

        At least one supercomputer (Blue Waters, run by NCSA on the University of Illinois campus) has opted out [hpcwire.com] of being included in these lists for exactly this reason. No need to waste time, energy, and money proving how awesome your resources are; just use them to actually do stuff.

        Makes you wonder how many others are doing the same as Blue Waters, and where everyone would fall if all supercomputers were actually included in the list.
    • (Score: 0) by Anonymous Coward on Tuesday June 24 2014, @12:58AM (#59185)

      > At this point it seems like the competition should be "Most useful work done with a supercomputer".

      Actually, I was thinking that a more useful metric would be number of nodes available for work at any given time. Supercomputers need to be taken off-line periodically for maintenance. If you put all your eggs in one basket, this means that your entire HPC infrastructure is periodically unavailable for use. Thus, as others have so eloquently pointed out, this turns into a mere dick-sizing contest. While it is important to ask what they have managed to accomplish with this big-ass computer, that is a separate issue from raw computing power.
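
      A toy model of that availability point; the 95% uptime figure and the independence of outages are assumptions for illustration:

          # One big machine vs. several smaller ones: expected capacity is
          # the same, but the chance of having *zero* capacity differs.
          uptime = 0.95  # assumed fraction of time each system is available

          for n in (1, 4, 16):
              p_all_down = (1 - uptime) ** n  # independent outages assumed
              print(f"{n:2d} machine(s): expected capacity {uptime:.0%}, "
                    f"P(all down at once) = {p_all_down:.2e}")

      Splitting the budget across sixteen machines does not raise expected capacity, but it makes a total outage of the fleet astronomically unlikely compared to a single machine's 5% downtime.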

  • (Score: 2) by Alfred (4006) on Monday June 23 2014, @02:52PM (#59017)

    A race without a finish line is really a death march.

  • (Score: 2) by wonkey_monkey (279) on Monday June 23 2014, @03:28PM (#59038)

    Was going to post the same thing. Reminds me of this [youtube.com] (though as was often the case the radio version was better).

    --
    systemd is Roko's Basilisk