
China Wins Supercomputer Race

posted by azrael on Monday June 23 2014, @01:16PM
from the underwear-on-the-outside dept.

Wired reports that China continues to dominate the high end of the Top500 list of the world's most powerful supercomputers, even as the growth of the computing power on the list seems to be stagnating.

Tianhe-2, run by China's National University of Defense Technology, clocked 33.86 Pflop/s (quadrillions of calculations per second) for the 43rd edition of the Top500, released Monday at the International Supercomputing Conference in Leipzig, Germany. The runner-up in this twice-yearly ranking came in at only half the speed: The U.S. Energy Department's Titan, a Cray XK7 machine at Oak Ridge National Laboratory, tested out at 17.59 Pflop/s.
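The Top500 ranking is produced with the HPL (LINPACK) benchmark: the figures quoted above are measured, sustained results (Rmax), while a machine's theoretical peak (Rpeak) is simple arithmetic over its hardware. Here is a minimal sketch of that arithmetic in Python; the hardware numbers are illustrative assumptions, and only the 33.86 Pflop/s figure comes from the article:

```python
# Back-of-the-envelope peak-FLOPS arithmetic for a hypothetical cluster.
# All hardware numbers below are assumptions for illustration; only the
# 33.86 Pflop/s Rmax comes from the story.

nodes = 16_000            # assumed node count
cores_per_node = 195      # assumed cores per node (CPUs + accelerators)
clock_hz = 1.5e9          # assumed average clock: 1.5 GHz
flops_per_cycle = 16      # assumed double-precision FLOPs per core per cycle

rpeak = nodes * cores_per_node * clock_hz * flops_per_cycle
rmax = 33.86e15           # measured HPL result, from the article

print(f"Theoretical peak : {rpeak / 1e15:.2f} Pflop/s")
print(f"Measured Rmax    : {rmax / 1e15:.2f} Pflop/s")
print(f"HPL efficiency   : {rmax / rpeak:.1%}")
```

Real systems sustain only a fraction of Rpeak on HPL, and typically far less on production workloads, which is part of why the ranking is contested in the comments below.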

Related Stories

China's World-Beating Supercomputer Fails to Impress Some Potential Clients

The mainland's billion-yuan supercomputer might be the most powerful in the world, but some researchers say its benefit to them is limited by its high operating cost and a lack of software. Tianhe-2 last week held onto its first-place ranking in the Top 500 chart, which measures the capacity of the world's supercomputers. It performed at a sustained 33.86 petaflops, or quadrillions of calculations per second. But all that muscle is not translating into practical use, some potential clients say.

One problem is cost. The electricity bill for the machine at the Sun Yat-sen University campus in Guangzhou runs between 400,000 yuan (US$64,516) and 600,000 yuan (US$96,774) a day, which ultimately falls to the user. Another obstacle, they say, is a lack of software. Tianhe-2 has been used for railway design, earthquake simulation, astrophysics and genetic studies. But so far investment has focused on hardware, forcing clients to write their own programmes in order to use it.
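The quoted electricity bill is easy to sanity-check. With an assumed full-system draw of about 18 MW including cooling and an assumed industrial tariff of roughly 1 yuan/kWh (neither figure is from the article), the arithmetic lands inside the reported range:

```python
# Sanity check of the reported 400,000-600,000 yuan/day electricity bill.
# Power draw and tariff are assumptions; only the yuan range comes from
# the article.

power_mw = 18.0      # assumed total draw, including cooling (MW)
tariff = 1.0         # assumed tariff, yuan per kWh

kwh_per_day = power_mw * 1_000 * 24
cost_per_day = kwh_per_day * tariff
print(f"{kwh_per_day:,.0f} kWh/day -> about {cost_per_day:,.0f} yuan/day")
# ~432,000 yuan/day, consistent with the reported range
```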

  • (Score: 4, Interesting) by Anonymous Coward on Monday June 23 2014, @01:41PM (#58987)

    Linux running more than 470 of the 500 machines: that's quite an achievement.

    • (Score: 0) by Anonymous Coward on Monday June 23 2014, @03:02PM (#59021)

      There are many Linux kernels. Are the 470 all off-the-shelf kernels I can download from kernel.org, or have they been modified by developers for specific purposes?
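      One crude way to probe that on a given node is the kernel release string: vanilla kernel.org builds usually report a bare version number, while vendor or distro kernels carry a suffix. A sketch in Python; the heuristic is an assumption, not a definitive test:

      ```python
      # Heuristic check: does this node run a stock-looking kernel?
      # A bare "major.minor[.patch]" release suggests a vanilla kernel.org
      # build; a suffix (e.g. "-123.el7.x86_64") suggests vendor patches.
      # This is an assumption-driven sketch, not a reliable detector.

      import platform
      import re

      release = platform.release()      # e.g. "3.10.0-123.el7.x86_64"
      if re.fullmatch(r"\d+\.\d+(\.\d+)?", release):
          print(f"{release}: looks like a vanilla kernel.org build")
      else:
          print(f"{release}: carries a suffix, likely a modified kernel")
      ```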

      • (Score: 2) by BradTheGeek (450) on Monday June 23 2014, @03:23PM (#59034)

        At least it can be modified for specific purposes (including clustered scientific applications). Try doing that with your Windows Server.

  • (Score: 5, Interesting) by Leebert (3511) on Monday June 23 2014, @02:03PM (#58993)

    "China Wins Supercomputer Race"? I wasn't aware that there was a finish line.

    A more interesting question, to me, is total research compute capacity installed. China might well have the "fastest" single installation, but I'm curious how many large HPC installations they have in total. It's easy to build the fastest cluster if you dump all of your resources into it. If, however, China is building a similar number of clusters to the US and still taking spots in the top 10, that's much more interesting.
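    Answering that is mostly a data-aggregation exercise over the published list. A sketch, assuming a CSV export of the Top500 with hypothetical "country" and "rmax_pflops" columns (the filename and column names are made up, not an official Top500 format):

    ```python
    # Aggregate installed capacity per country across the whole Top500 list,
    # rather than looking only at the #1 machine. The CSV schema here is a
    # hypothetical export.

    import csv
    from collections import defaultdict

    systems = defaultdict(int)
    capacity = defaultdict(float)

    with open("top500_2014_06.csv", newline="") as f:
        for row in csv.DictReader(f):
            systems[row["country"]] += 1
            capacity[row["country"]] += float(row["rmax_pflops"])

    for country in sorted(capacity, key=capacity.get, reverse=True)[:10]:
        print(f"{country:15s} {systems[country]:4d} systems, "
              f"{capacity[country]:8.2f} Pflop/s total")
    ```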

    • (Score: 0) by Anonymous Coward on Monday June 23 2014, @02:18PM (#59000)

      Lead dog in the pack for that "race" but they didn't catch the rabbit. Another "race" coming up soon, get your dog ready or get your bets in, or get out of the way!

      Oh for my own desert island with plenty of fish and forage so I can really "get out of the way".

    • (Score: 4, Insightful) by tibman (134) Subscriber Badge on Monday June 23 2014, @02:21PM (#59001)

      At this point it seems like the competition should be "Most useful work done with a supercomputer".

      --
      SN won't survive on lurkers alone. Write comments.
      • (Score: 4, Interesting) by Leebert (3511) on Monday June 23 2014, @02:35PM (#59010)

        Absolutely. Having worked in HPC for several years, that's something that always bothered me. There was never a whole lot of "How much compute power do we need to do science X?" analysis done. It was "How much money can we get, and how big of a cluster can we build?" It was always funny to see the amount of energy that was expended to put the system into a configuration to run the Top500 tests, and then RE-configure the system for its ultimate production setup.

        Tremendous dickswinging waste of time, if you ask me. Although I can begrudgingly concede that it does have some positive effects, mostly in driving commitment to spend money. Like the space race. And I will begrudgingly concede that, no matter how much compute capacity they got, there was always something to use it for, so I can somewhat understand not bothering to do too much capacity requirements analysis.

        Still bugged me, though. :)

        • (Score: 3, Interesting) by Angry Jesus (182) on Monday June 23 2014, @04:33PM (#59065)

          > Tremendous dickswinging waste of time, if you ask me.

          I've been in the business for 20+ years, basically since I got out of college. What I've seen is that the Top500 is about politics and marketing. HPC is an industry that is very closely associated with international politics. A lot of that is due to stuff like nuclear weapon simulations and crypto - we've had a lot of export restrictions that are increasingly irrelevant due to cost improvements, non-domestic fabs and architecture changes (it was a lot easier to restrict exports before clustering became the predominant design).

          But don't underestimate politics and marketing; ultimately everything has to pass through approvals by non-technical people, and that's the language they speak. Think of it as the bureaucratic equivalent of cool cases and blinkenlights. [u-tokyo.ac.jp]

          • (Score: 2) by Leebert (3511) on Monday June 23 2014, @04:50PM (#59067)

            We might well actually know each other in meatspace. :)

            But yes, you more eloquently stated what I meant about having some degree of positive effect.

            • (Score: 2) by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Tuesday June 24 2014, @08:12AM (#59293) Homepage
              "Also, we like to think of supercomputing as a tool for improving economies, adding to knowledge and increasing prosperity." -- http://www.top500.org/blog/who-are-the-top500-list-countries/

              To me it seems they're putting the cart before the horse. It's a way for those who've got the muscle to wave it around, and for those who wish they had some muscle to pretend. But I don't see how it helps you build the muscle.

              I'm curious: how does the top-500 computers list compare to the top-500 skyscrapers list (if there is such a thing)?

              (And yes, I am aware that I may be hinting at the unproven skyscraper hypothesis. That was in part deliberate, but does not mean I believe the hypothesis is true. And this disclaimer doesn't mean that I believe the hypothesis is false, either.)
              --
              Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
          • (Score: 3, Interesting) by opinionated_science (4031) on Monday June 23 2014, @09:14PM (#59143)

            I use supercomputers for research (molecular biophysics). Can clusters do the job? Sure, some of them, but they still need to sit in an air-conditioned room to be reliable, and that is a significant part of the cost.

            The largest machines can run an entirely different class of problems that scale smoothly from desktops up to the largest machines. In biology there are many canonical problems that could utilize every one of those machines, and the same is true of chemistry and physics at every scale.

            The politics you mention is, of course, a factor, since many of the machines are not generally available and are reserved for spook work. A significant fraction is actually commercial, and you never see those numbers (e.g. Google, Yahoo, Facebook, etc.).

            But as researchers, we could have 10^6 times the current amount of computing and it would still be filled to capacity. We could desperately do with an internode latency of 1 µs!

            Perhaps a better question is: how do we make these machines *easier* to use? There is a considerable energy barrier to implementing a specific solution. Google's "exacloud" is not really the solution, as it is not tightly coupled - useful, but not very efficient for many problems.
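            The latency point deserves unpacking: tightly coupled codes synchronize every timestep, so the latency term stops strong scaling long before the compute term does. A toy model, with all numbers purely illustrative:

            ```python
            # Toy strong-scaling model: per-step time is perfectly divisible
            # compute plus a latency cost that grows with the number of
            # synchronizing communication rounds. All numbers are illustrative.

            import math

            def step_time(nodes, work_s=1.0, latency_s=5e-6, rounds=10):
                return work_s / nodes + rounds * latency_s * math.log2(max(nodes, 2))

            for n in (1, 64, 1024, 16384):
                t_slow = step_time(n, latency_s=5e-6)   # commodity interconnect
                t_fast = step_time(n, latency_s=1e-6)   # the hoped-for 1 us
                print(f"{n:6d} nodes: {t_slow*1e3:8.3f} ms/step vs {t_fast*1e3:8.3f} ms/step")
            ```

            At large node counts the latency term dominates, which is why a 1 µs interconnect (and tight coupling generally) matters more than raw node count for these workloads.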

        • (Score: 2, Interesting) by SrLnclt (1473) on Monday June 23 2014, @09:12PM (#59141)
          At least one supercomputer (Blue Waters, run by NCSA on the University of Illinois campus) has opted out [hpcwire.com] of being included in these lists for exactly this reason. No need to waste time/energy/resources proving how awesome your resources are; just use them to actually do stuff.

          Makes you wonder how many others are doing the same as Blue Waters, and where everyone would fall if all supercomputers were actually included in the list.
      • (Score: 0) by Anonymous Coward on Tuesday June 24 2014, @12:58AM (#59185)

        > At this point it seems like the competition should be "Most useful work done with a supercomputer".

        Actually, I was thinking that a more useful metric would be number of nodes available for work at any given time. Supercomputers need to be taken off-line periodically for maintenance. If you put all your eggs in one basket, this means that your entire HPC infrastructure is periodically unavailable for use. Thus, as others have so eloquently pointed out, this turns into a mere dick-sizing contest. While it is important to ask what they have managed to accomplish with this big-ass computer, that is a separate issue from raw computing power.
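        The trade-off is easy to quantify: splitting the same aggregate capacity across independent machines leaves the expected available Pflop/s unchanged, but makes a total outage vanishingly rare. A sketch with assumed numbers:

        ```python
        # Availability arithmetic: one big machine vs. ten smaller ones with
        # the same total capacity, each independently offline for maintenance
        # some fraction of the time. The 5% downtime figure is an assumption.

        downtime = 0.05          # assumed per-machine downtime fraction
        total_pflops = 33.86     # same aggregate capacity in both scenarios

        one_big_expected = total_pflops * (1 - downtime)
        ten_small_expected = 10 * (total_pflops / 10) * (1 - downtime)
        p_all_down = downtime ** 10   # all ten offline at once

        print(f"Expected capacity, one machine : {one_big_expected:.2f} Pflop/s")
        print(f"Expected capacity, ten machines: {ten_small_expected:.2f} Pflop/s")
        print(f"P(zero capacity), one machine  : {downtime:.0%}")
        print(f"P(zero capacity), ten machines : {p_all_down:.2e}")
        ```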

    • (Score: 2) by Alfred (4006) on Monday June 23 2014, @02:52PM (#59017) Journal

      A race without a finish line is really a death march.

    • (Score: 2) by wonkey_monkey (279) on Monday June 23 2014, @03:28PM (#59038) Homepage

      Was going to post the same thing. Reminds me of this [youtube.com] (though, as was often the case, the radio version was better).

      --
      systemd is Roko's Basilisk
  • (Score: 3, Funny) by LoRdTAW (3755) on Monday June 23 2014, @03:13PM (#59028) Journal

    It's over. China wins the supercomputer race. I'm throwing my PC into the trash tonight; it will never be as good as Tianhe-2.

    Oh wait, there isn't really a race. Just a silly pissing contest that is completely meaningless.

    As another poster said, the real winner is the Linux kernel. Go Linux!

    • (Score: 0) by Anonymous Coward on Monday June 23 2014, @04:53PM (#59070)

      Awww, poor old doggie, tired of chasing the wabbit too?

    • (Score: 1) by clone141166 (59) on Tuesday June 24 2014, @01:38AM (#59197)

      Yeah, you're mostly right, but having the most compute capacity could be useful for things like breaking encryption.

      I wonder how fast the NSA's off-the-books supercomputer is though?
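    For scale, here is the brute-force arithmetic, under the wildly generous assumption that testing one key costs a single floating-point operation:

    ```python
    # Brute-force keyspace arithmetic at Tianhe-2's measured 33.86 Pflop/s,
    # assuming (very generously) one key trial per floating-point operation.

    rmax = 33.86e15                 # Pflop/s, from the article
    seconds_per_year = 3.156e7

    for bits in (56, 80, 128):
        trials = 2 ** bits / 2      # on average, the key is found halfway
        years = trials / rmax / seconds_per_year
        print(f"{bits:3d}-bit keyspace: ~{years:.2e} years on average")
    ```

    A 56-bit keyspace falls in about a second, but a 128-bit keyspace needs on the order of 10^14 years even at this rate, so raw capacity matters mainly against legacy ciphers and weak keys.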

  • (Score: 2) by Gaaark (41) on Monday June 23 2014, @05:05PM (#59075) Journal

    What would they achieve if they clustered all 470 of these together?

    Would the answer be 42? Or would it be 24, 'cause it's in Chinese?

    ..... or some other stupid, possibly racist and not at all humorous answer?

    --
    --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 1) by present_arms (4392) on Monday June 23 2014, @06:05PM (#59103) Homepage Journal

      What would happen is that you would be able to run Vista and Crysis ;) Sorry, I couldn't resist.

      --
      http://trinity.mypclinuxos.com/
    • (Score: 2) by Pslytely Psycho (1218) on Monday June 23 2014, @10:50PM (#59159)

      You could run a 1500-part ship in Kerbal Space Program at 12 fps..... instead of 1 fpm.

      --
      Alex Jones lawyer inspires new TV series: CSI Moron Division.