Wired reports that China continues to dominate the high end of the Top500 list of the world's most powerful supercomputers, even as the growth of the computing power on the list seems to be stagnating.
Tianhe-2, run by China's National University of Defense Technology, clocked 33.86 Pflop/s (quadrillions of calculations per second) for the 43rd edition of the Top500, released Monday at the International Supercomputing Conference in Leipzig, Germany. The runner-up in this twice-yearly ranking came in at only half the speed: The U.S. Energy Department's Titan, a Cray XK7 machine at Oak Ridge National Laboratory, tested out at 17.59 Pflop/s.
(Score: 5, Interesting) by Leebert on Monday June 23 2014, @02:03PM
"China Wins Supercomputer Race"? I wasn't aware that there was a finish line.
A more interesting question, to me, is total installed research compute capacity. China might well have the "fastest" single installation, but I'm curious how many large HPC systems they have in total. It's easy to build the fastest cluster if you dump all of your resources into it. If, however, China is building a similar number of clusters to the US and still taking spots in the top 10, that's much more interesting.
(Score: 0) by Anonymous Coward on Monday June 23 2014, @02:18PM
Lead dog in the pack for that "race" but they didn't catch the rabbit. Another "race" coming up soon, get your dog ready or get your bets in, or get out of the way!
Oh for my own desert island with plenty of fish and forage so I can really "get out of the way".
(Score: 4, Insightful) by tibman on Monday June 23 2014, @02:21PM
At this point it seems like the competition should be "Most useful work done with a supercomputer".
SN won't survive on lurkers alone. Write comments.
(Score: 4, Interesting) by Leebert on Monday June 23 2014, @02:35PM
Absolutely. Having worked in HPC for several years, that's something that always bothered me. There was never a whole lot of "How much compute power do we need to do science X?" analysis done. It was "How much money can we get, and how big of a cluster can we build?" It was always funny to see the amount of energy that was expended to put the system into a configuration to run the Top500 tests, and then RE-configure the system for its ultimate production setup.
Tremendous dickswinging waste of time, if you ask me. Although I can begrudgingly concede that it does have some positive effects, mostly in driving commitment to spend money. Like the space race. And I will begrudgingly concede that, no matter how much compute capacity they got, there was always something to use it for, so I can somewhat understand not bothering to do too much capacity requirements analysis.
Still bugged me, though. :)
(Score: 3, Interesting) by Angry Jesus on Monday June 23 2014, @04:33PM
> Tremendous dickswinging waste of time, if you ask me.
I've been in the business for 20+ years, basically since I got out of college. What I've seen is that the Top500 is about politics and marketing. HPC is an industry that is very closely associated with international politics. A lot of that is due to stuff like nuclear weapon simulations and crypto - we've had a lot of export restrictions that are increasingly irrelevant due to cost improvements, non-domestic fabs and architecture changes (it was a lot easier to restrict exports before clustering became the predominant design).
But don't underestimate politics and marketing, ultimately everything has to pass through approvals by non-technical people and that's the language they speak. Think of it as the bureaucratic equivalent of cool cases and blinkenlights. [u-tokyo.ac.jp]
(Score: 2) by Leebert on Monday June 23 2014, @04:50PM
We might well actually know each other in meatspace. :)
But yes, you more eloquently stated what I meant about having some degree of positive effect.
(Score: 2) by FatPhil on Tuesday June 24 2014, @08:12AM
To me it seems they're putting the cart before the horse. It's a way for those who've got the muscle to wave it around, and for those who wish they had some muscle to pretend. But I don't see how it helps you build the muscle.
I'm curious, how does the top-500 computers list compare to the top-500 skyscrapers list (if there is such a thing?)
(And yes, I am aware that I may be making hints towards the unproven skyscraper hypothesis. That was in part deliberate, but does not mean I believe in the truth of that hypothesis. And this disclaimer doesn't mean that I believe the hypothesis is false either.)
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 3, Interesting) by opinionated_science on Monday June 23 2014, @09:14PM
I use supercomputers for research (molecular biophysics). Can clusters do the job? Sure some of them, but they still need to sit in an air-conditioned room to be reliable, and that is a significant amount of the cost.
The largest machines can run an entirely different class of problems from those that scale smoothly from desktops upward. In biology there are many canonical problems that could utilize every one of those machines - and the same is true of chemistry and physics at any scale.
The politics you mention is, of course, a factor, since many of the machines are not generally available and are for spook work. A significant fraction is actually commercial, and you never see those numbers (e.g. Google, Yahoo, Facebook, etc.).
But as a researcher, we could have 10^6 times the current amount of computing and it would still be filled to capacity. And we could desperately do with an internode latency of 1 us!
Perhaps a better question is: how do we make these machines *easier* to use? There is a considerable energy barrier to implementing a specific solution! Google's "exacloud" is not really the answer, as it is not tightly coupled - useful, but not very efficient for many problems.
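To put some rough numbers on why that 1 us latency matters: here's a back-of-envelope sketch of parallel efficiency for a tightly coupled job (like molecular dynamics) where every timestep needs a few latency-bound message exchanges. All the numbers are illustrative assumptions I picked for the example, not measurements from any real machine.

```python
# Back-of-envelope model: compute time per timestep shrinks as you add
# nodes, but the fixed latency cost of each per-step exchange does not,
# so latency eventually dominates. Illustrative numbers only.

TOTAL_WORK_S = 1.0       # assumed serial compute time for one timestep (s)
EXCHANGES_PER_STEP = 10  # assumed latency-bound messages per node per step

def step_time(nodes, latency_s):
    """Wall time per timestep: divided compute plus fixed exchange cost."""
    return TOTAL_WORK_S / nodes + EXCHANGES_PER_STEP * latency_s

def efficiency(nodes, latency_s):
    """Parallel efficiency versus an ideal latency-free machine."""
    ideal = TOTAL_WORK_S / nodes
    return ideal / step_time(nodes, latency_s)

for latency in (10e-6, 1e-6):   # 10 us today vs. the hoped-for 1 us
    for nodes in (1_000, 10_000, 100_000):
        print(f"{nodes:>7} nodes @ {latency * 1e6:>4.0f} us: "
              f"efficiency {efficiency(nodes, latency):.1%}")
```

Under these assumptions, 10 us latency drops you to 50% efficiency at 10,000 nodes, while 1 us pushes that same cliff out to 100,000 nodes - which is the whole point: for strongly coupled problems, latency, not flops, sets the ceiling.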
(Score: 2, Interesting) by SrLnclt on Monday June 23 2014, @09:12PM
Makes you wonder how many others are doing the same as Blue Waters, and where everyone would fall if all supercomputers were actually included in the list.
(Score: 0) by Anonymous Coward on Tuesday June 24 2014, @12:58AM
Actually, I was thinking that a more useful metric would be number of nodes available for work at any given time. Supercomputers need to be taken off-line periodically for maintenance. If you put all your eggs in one basket, this means that your entire HPC infrastructure is periodically unavailable for use. Thus, as others have so eloquently pointed out, this turns into a mere dick-sizing contest. While it is important to ask what they have managed to accomplish with this big-ass computer, that is a separate issue from raw computing power.
(Score: 2) by Alfred on Monday June 23 2014, @02:52PM
A race without a finish line is really a death march.
(Score: 2) by wonkey_monkey on Monday June 23 2014, @03:28PM
Was going to post the same thing. Reminds me of this [youtube.com] (though as was often the case the radio version was better).
systemd is Roko's Basilisk