
SoylentNews is people

posted by cmn32480 on Monday July 20 2015, @07:33PM   Printer-friendly
from the does-it-run-windows? dept.

Currently, the world's most powerful supercomputers can ramp up to more than a thousand trillion operations per second, or a petaflop. But computing power is not growing as fast as it has in the past. On Monday, the June 2015 listing of the Top 500 most powerful supercomputers in the world revealed the beginnings of a plateau in performance growth.
...
The development rate began tapering off around 2008. Between 2010 and 2013, aggregate increases ranged between 26 percent and 66 percent. And on this June's list, there was a mere 17 percent increase from last November.
...
Despite the slowdown, many computational scientists expect performance to reach exascale, or more than a billion billion operations per second, by 2020.

Hmm, if they reach exascale computing will the weatherman finally be able to predict if it's going to rain this afternoon? Because he sucks at that now.


Original Submission

  • (Score: 2, Insightful) by Anonymous Coward on Monday July 20 2015, @07:39PM

    by Anonymous Coward on Monday July 20 2015, @07:39PM (#211537)

    A faster computer with shitty models = shitty results faster.

    Even if the models are awesome, for the sake of argument, to be really good at predicting stuff they'd need lots more inputs (like millions of cell phone sensor reports) for the model to get halfway decent accuracy. Then the high crunching power could be put to work.

    Weather prediction is hard, man.

    • (Score: 2) by bob_super on Monday July 20 2015, @08:02PM

      by bob_super (1357) on Monday July 20 2015, @08:02PM (#211549)

      Considering the mountains in the area, the fact that the prediction at 3 days is more often accurate than not is pretty darn good. The 5-day one is a fairly reliable indicator too.
      But my forecast is biased by the big puddle to the left. When I was in the plains, they usually had the right patterns, but often had to adjust ETAs by 12 hours...

      • (Score: 2) by richtopia on Monday July 20 2015, @08:37PM

        by richtopia (3160) on Monday July 20 2015, @08:37PM (#211561) Homepage Journal

        If you want an accurate weatherman, move.

        Phoenix has a pretty accurate weather forecast.

        • (Score: 3, Funny) by bob_super on Monday July 20 2015, @09:09PM

          by bob_super (1357) on Monday July 20 2015, @09:09PM (#211582)

          Is it expressed in "minutes to fry an egg on your hood while you steam buns in your trunk"?

        • (Score: 2) by edIII on Monday July 20 2015, @10:59PM

          by edIII (791) on Monday July 20 2015, @10:59PM (#211641)

          Phoenix has a pretty accurate weather forecast.

          Lemme guess...... hot, racist, and angry for the next week? Or how many minutes till fatal exposure in a shopping mall parking lot?

          --
          Technically, lunchtime is at any moment. It's just a wave function.
        • (Score: 2) by mmcmonster on Monday July 20 2015, @11:10PM

          by mmcmonster (401) on Monday July 20 2015, @11:10PM (#211647)

          Is it still called weather when it's constant day-to-day all year long?

        • (Score: 2) by Snotnose on Monday July 20 2015, @11:36PM

          by Snotnose (1623) on Monday July 20 2015, @11:36PM (#211657)

          My favorite weather CSB. I live in San Diego. Last Saturday at about 6:10 AM the news showed their Accurate FutureCast!!! predictions showing it would start raining at 9:30. 5 minutes later I got coffee, looked out the window, and it was pouring.

          So Accurate FutureCast!!! can't get it right 5 minutes in the future.

          Ok, in all honesty the weather was pretty messed up last weekend and hard to predict. But 5 minutes vs 3 hours is pretty bad.

          --
          I came. I saw. I forgot why I came.
          • (Score: 1) by Kharnynb on Tuesday July 21 2015, @05:33AM

            by Kharnynb (5468) on Tuesday July 21 2015, @05:33AM (#211777)

            I live in the Finnish lake district; they can't predict when or where it will rain with enough accuracy to set a calendar by, let alone a watch.

            Then again, I've seen it rain across the street while our side was sunny and dry... large bodies of water really do make predicting the weather hard.

            --
            Build a man a fire, and he'll be warm for a day. Set a man on fire, and he'll be warm for the rest of his life.
        • (Score: 2) by Subsentient on Tuesday July 21 2015, @01:52AM

          by Subsentient (1111) on Tuesday July 21 2015, @01:52AM (#211705) Homepage Journal

          I live outside of Phoenix, and yeah, weather prediction is very accurate here.
          120F for the next 200 days? Check.

          --
          "It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
    • (Score: 1, Informative) by Anonymous Coward on Monday July 20 2015, @08:38PM

      by Anonymous Coward on Monday July 20 2015, @08:38PM (#211563)

      Weather predictions are quite accurate out to a day or two. This old canard needs to DIAF.

      If you compare outcomes to predictions over time, you'll find that a "90% chance of rain" really does mean it rained 9 times out of 10. But humans don't notice the 9 times it was correct - only the 1 time it was wrong.
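      A minimal sketch of that calibration check in Python (the function name and data are mine, purely illustrative, not any weather-service API): bucket days by the forecast probability and compare each bucket to the observed rain frequency.

```python
# Toy calibration check: group forecasts by predicted rain probability and
# compare each group to the observed frequency of rain. Illustrative only.
from collections import defaultdict

def calibration(forecasts, outcomes):
    """forecasts: predicted rain probabilities; outcomes: 1 if it rained."""
    buckets = defaultdict(list)
    for prob, rained in zip(forecasts, outcomes):
        buckets[round(prob, 1)].append(rained)
    # Observed frequency of rain within each forecast bucket.
    return {p: sum(v) / len(v) for p, v in sorted(buckets.items())}

# A well-calibrated forecaster: ten "90% chance of rain" days, nine wet.
print(calibration([0.9] * 10, [1] * 9 + [0]))  # {0.9: 0.9}
```

      A forecaster is well calibrated exactly when each bucket's observed frequency matches its label, which is the claim being made here.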

      • (Score: 3, Informative) by Beryllium Sphere (r) on Monday July 20 2015, @09:04PM

        by Beryllium Sphere (r) (5062) on Monday July 20 2015, @09:04PM (#211577)

        Data are limited and uncertain, and the system goes through periods when small changes have big results.

        One cool thing meteorologists do is run the same model repeatedly with inputs slightly off from the reported ones. If the output is pretty much the same, they know that they're in an interval where butterfly wings damp out to nothing, and extended forecasts become potentially reliable.

        That said, weather forecasts are remarkable today. Jokes about them are leftovers from the middle of the last century.
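        The perturbed-inputs trick can be sketched with a toy chaotic system. This is illustrative only (real ensemble prediction perturbs full 3-D model states and uses proper integrators); the Lorenz '63 equations stand in for the atmosphere here.

```python
# Ensemble-forecast sketch: run the same chaotic model from slightly
# perturbed initial states. Tight agreement means a trustworthy forecast
# window; wide spread means the butterfly wings have taken over.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit-Euler step of the Lorenz '63 equations.
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, steps):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

def spread(states):
    # Ensemble spread: range of the x coordinate across members.
    xs = [s[0] for s in states]
    return max(xs) - min(xs)

# Five ensemble members, initial x perturbed by a few parts per million.
starts = [(1.0 + 1e-6 * i, 1.0, 1.0) for i in range(5)]
short = [run(s, 100) for s in starts]    # short range: members still agree
long_ = [run(s, 2000) for s in starts]   # long range: members diverge
print(spread(short), spread(long_))
```

        When the short-range spread stays tiny, the forecast interval is trustworthy; once the spread saturates, the extended forecast carries little information.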

        • (Score: 2) by fritsd on Monday July 20 2015, @09:26PM

          by fritsd (4586) on Monday July 20 2015, @09:26PM (#211592) Journal

          Everybody seems to remember chaos theory for Jeff Goldblum's crazy role in "Jurassic Park", but almost nobody seems to remember what it actually means :-(

          • (Score: 3, Insightful) by Tork on Monday July 20 2015, @10:53PM

            by Tork (3914) on Monday July 20 2015, @10:53PM (#211640)
            It meant: "Audience, you need to both believe that the scientists at Jurassic Park are really really smart and that the dinos are going to get out of control and eat people anyway."
            --
            🏳️‍🌈 Proud Ally 🏳️‍🌈
      • (Score: 2) by TheRaven on Tuesday July 21 2015, @08:21AM

        by TheRaven (270) on Tuesday July 21 2015, @08:21AM (#211833) Journal
        The problem I have with weather forecasts is that they never come with a confidence level. If you look at the satellite maps, you can often get a pretty good idea of how accurate the forecast is. If you're in a stable patch of atmosphere, they're going to be pretty good. If there's a single front moving across, then the times may be off, but they're likely to still be pretty good. If your weather is determined by two or more fronts colliding, then all bets are off. The people doing the weather models know to a pretty high degree how accurate their predictions are, but never include this information when they send forecasts out to the public.
        --
        sudo mod me up
        • (Score: 2) by kurenai.tsubasa on Tuesday July 21 2015, @01:50PM

          by kurenai.tsubasa (5227) on Tuesday July 21 2015, @01:50PM (#211905) Journal

          Go to NOAA [noaa.gov] and read the forecast discussion link (example [weather.gov]). Sure, it's not an actual confidence interval like you're probably looking for, but they will throw out hints here and there about how they're interpreting the models and how confident they feel about the published forecast. Be prepared to click on all kinds of obscure abbreviations and acronyms until you get used to the jargon (when they're kind enough to turn the shorthand jargon into a link anyway).

    • (Score: 3, Informative) by VortexCortex on Monday July 20 2015, @09:53PM

      by VortexCortex (4067) on Monday July 20 2015, @09:53PM (#211607)

      If you think it's bad now, just wait for 2016. Our Weather satellites (esp. polar orbiting sats) didn't get funding soon enough and we'll have even shittier data to feed the models, esp. after the west coast GOES13 sat failed. [space.com] The rest of the world made the US the butt of a joke since we can spend trillions on wars to ensure oil is priced in $US -- spending more than NASA's entire budget just on air conditioning troops -- but we didn't spend a few million to ensure NOAA has working weather satellites. [gao.gov] Weather sats scheduled to replace the ones which were nearing end of life went unfunded which creates a "weather satellite gap". [weather.com]

      Protip: Before you start a software inquisition always (read: ALWAYS) check that your input isn't garbage first.

  • (Score: 5, Informative) by takyon on Monday July 20 2015, @07:52PM

    by takyon (881) <{takyon} {at} {soylentnews.org}> on Monday July 20 2015, @07:52PM (#211543) Journal

    Moore's law slowdown and Intel's tick-tock-tock will definitely contribute by delaying the release of 10nm Intel Xeon Phi. On the coprocessor front, GPUs have been stuck on 28nm for years. Procurement cycles have lengthened. Why replace your 5 petaflops supercomputer with a 20 petaflops supercomputer when you can wait a couple of years longer and get 100-200 petaflops?

    https://soylentnews.org/article.pl?sid=15/07/13/1118204 [soylentnews.org]

    The Platform has an analysis of the results [theplatform.net]. Although performance growth is slowing, pre-exascale supercomputers (100+ petaflops) can be expected within the next two to three years. The U.S. Department of Energy's Aurora supercomputer [soylentnews.org] will deliver 180 petaflops of performance in 2018. Around the same time, the Summit supercomputer is expected to reach 150-300 petaflops while Sierra will reach 100+ petaflops [anandtech.com]. ~1 exaflop supercomputers are expected to appear around 2018-2022.

    While it's true that it is the entire TOP500 list that is slowing down (such as the speed of the #500 computer on the list), the top position is a red herring. Tianhe-2 was even due to get an upgrade within the next 6 months that has been pushed back [soylentnews.org] to within the next 12 months.

    Here's a June 2015 stat: "There are 68 systems with performance greater than 1 petaflop/s on the list, up from 50 last November." That's the number to watch... how many systems deliver no less than 1000 teraflops. In the June 2010 list, it was just 3 systems with an RMAX above 1 petaflops.
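    That stat is just a threshold count over the list's Rmax column. A sketch in Python; the first five Rmax values approximate the published June 2015 leaders, and the smaller entries are invented:

```python
# Count systems delivering at least 1 petaflop/s (1000 teraflops) of
# sustained LINPACK performance. Rmax values are in teraflops; the first
# five approximate the June 2015 top systems, the last two are made up.
rmax_tflops = [33862.7, 17590.0, 17173.2, 10510.0, 8586.6, 900.0, 120.5]

petaflop_systems = sum(1 for r in rmax_tflops if r >= 1000.0)
print(petaflop_systems)  # 5 systems in this sample clear the bar
```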

    None of this is dire news. Less "free lunch" just means you have to work more efficiently with the resources you have. And there are still better chips on the horizon, even below 7nm. Finally, there are a number of newer technologies coming that could disrupt the standard classical CMOS chip, such as quantum [soylentnews.org], neuromorphic [soylentnews.org], and optical [theplatform.net].

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by mhajicek on Monday July 20 2015, @09:11PM

      by mhajicek (51) Subscriber Badge on Monday July 20 2015, @09:11PM (#211585)

      I'm holding out for the tachyon chip that spits out the answer before getting the input.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 2) by BsAtHome on Monday July 20 2015, @10:09PM

        by BsAtHome (889) on Monday July 20 2015, @10:09PM (#211616)

        That'll be 42 then.

        Never needing to know the question. It is overrated. (Can I get my HHGTTG sticker now?)

    • (Score: 3, Interesting) by fritsd on Monday July 20 2015, @09:44PM

      by fritsd (4586) on Monday July 20 2015, @09:44PM (#211604) Journal

      Procurement cycles have lengthened. Why replace your 5 petaflops supercomputer with a 20 petaflops supercomputer when you can wait a couple of years longer and get 100-200 petaflops?

      I think that's how it always works. The scientists ask for MORE MONEY for faster supercomputers; the accountants say: "ok, we'll allocate budget for the next 10 years and then you can buy an upgrade". 10 years later, an upgraded or new supercomputer is bought.

      Because there are a lot of research institutes with supercomputers in the world, and because they don't all buy their supercomputers in the same year, you see an aggregate effect, and that's the "Top 500" list.

      I thought there was an enthusiastic fellow at IBM who said that he could simulate cat brains if anyone handed him an exascale supercomputer. I wonder if we'll continue to see "general purpose" supercomputers, or if companies like IBM will start to invent CPUs with an x number of realistic spiky artificial neurons, and scale that up.

      I don't hear much anymore from the EPFL "Blue Brain" [bluebrain.epfl.ch] project; maybe it's stuck in interpersonal arguments or something.

      Just found a website http://electronicvisions.github.io/hbp-sp9-guidebook/index.html [github.io] referring the HBP Neuromorphic Computing Platform, and it mentions centers in Heidelberg and Manchester. So... what about Lausanne? Anybody know?

      • (Score: 4, Informative) by takyon on Monday July 20 2015, @10:02PM

        by takyon (881) <{takyon} {at} {soylentnews.org}> on Monday July 20 2015, @10:02PM (#211612) Journal

        IBM is throwing around lots of ideas. They've got the 7nm demo, they have all-optical chips, they have neuromorphic chips, they have quantum chips. Neuromorphic+quantum are potentially narrow purpose methods of computing.

        "Give me 1 exaflops to simulate a cat brain" doesn't make much sense to me. First, that implies you could potentially simulate any brain with today's resources if you just slowed down the simulation enough (days and months to simulate 1 second)... so maybe the amount of general purpose flops aren't the problem. As "neurons" increase, the amount of synapses increase much more. IBM's TrueNorth chip is a neuromorphic design [ibm.com] that mimics how neurons work.

        They say that a 28nm TrueNorth chip can simulate 1 million neurons and 256 million synapses using 70 milliwatts of power. They created an array of 16, boosting that to 16 million neurons and 4 billion synapses. They have an ambition of integrating 4096 chips on a single rack for 4 billion neurons and 1 trillion synapses, consuming 4 kilowatts of power. That's about 1/25th of a human brain's neurons.

        Shrink the process node, neurons simulated goes up. Stack chips (more feasible due to the lower heat) and neurons simulated goes up. It would consume less than 1% of the energy that big supercomputers use.
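        The scaling arithmetic above is easy to sanity-check. The per-chip figures are the ones quoted; the brain-neuron estimate is my assumption (the "about 1/25th" presumably uses the older ~100-billion figure):

```python
# Sanity-check the TrueNorth numbers quoted above. Per-chip figures are as
# published by IBM; the brain-neuron count is a common estimate, not IBM's.
neurons_per_chip = 1_000_000
synapses_per_chip = 256_000_000

array_neurons = 16 * neurons_per_chip        # 16 million (16-chip array)
array_synapses = 16 * synapses_per_chip      # ~4.1 billion synapses

rack_neurons = 4096 * neurons_per_chip       # ~4.1 billion (4096-chip rack)
rack_synapses = 4096 * synapses_per_chip     # ~1.05 trillion synapses

human_brain_neurons = 86_000_000_000         # common modern estimate
print(human_brain_neurons / rack_neurons)    # ~21: the rack is ~1/21 of a brain
```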

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by TheLink on Tuesday July 21 2015, @07:33AM

          by TheLink (332) on Tuesday July 21 2015, @07:33AM (#211805) Journal
          I'd like to see them simulate a single white blood cell first to practically 100%. Just because you know the structure of a machine and how it moves doesn't always mean you understand how it works and why it does the stuff it does.

          If they can't even do a white blood cell to 100%, I'm going to laugh at talks of simulating a cat brain.

          I'm pretty sure 90% of the time Stephen Hawking doesn't do that much, so we can probably simulate him to 90%. But the magic is somewhere in the 10% we can't simulate yet ;).
  • (Score: 4, Interesting) by VLM on Monday July 20 2015, @08:11PM

    by VLM (445) on Monday July 20 2015, @08:11PM (#211552)

    The article seems to summarize to, if they were willing to pay enough money, which they aren't...

    There was the usual fluff at the end about weather modeling, but more practically speaking "they" might be running out of applications for computation.

    Logic would dictate that eventually you run out of applications for a large enough number of cycles. Take car crash finite element modeling as an example: let's say you can't manufacture a car to better than a millimeter of precision, so you grid the car at mm scale, and unless you're expecting detonations there's little point in modeling deformations faster than the transit time of sound in steel across that mm. The slowest you can move while still causing a "serious" accident sets the total number of frames calculated. No matter how fast you generate results, you need them to arrive slowly enough for humans to make sense of the output and decide "between test run #523 and #524, based on the results of #523, we'll optimize the stabilizer rib position in the hood hinge." Finally, there is a finite and low limit on the total number of new car models mankind can release per year. So there exists a finite, although large, maximum number of FLOPs that can be applied to the whole genre of car crash finite element analysis. It's possible that existing supercomputers, despite somewhat lower total throughput, have finally reached a market equilibrium at fractional-billion purchase prices and tens of millions per year in operating expense.
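    A back-of-envelope version of that bound, with every constant a rough assumption of mine:

```python
# Rough upper bound on the FLOPs one crash simulation can usefully consume.
# Every input below is an order-of-magnitude assumption, not measured data.
car_volume_m3 = 5.0 * 2.0 * 1.5           # bounding box of a car, metres
cell_size_m = 1e-3                        # 1 mm grid (manufacturing precision)
cells = car_volume_m3 / cell_size_m**3    # 1.5e10 grid cells

sound_in_steel_m_s = 5900.0               # ~speed of sound in steel
dt = cell_size_m / sound_in_steel_m_s     # CFL-style timestep, ~1.7e-7 s
crash_duration_s = 0.15                   # a slow but "serious" impact
steps = crash_duration_s / dt             # ~9e5 timesteps

flops_per_cell_step = 100.0               # assumed cost of one cell update
total_flops = cells * steps * flops_per_cell_step
print(f"{total_flops:.1e}")               # order of 1e18: finite, if large
```

    So a single run tops out around an exaflop-second's worth of work, and the human review loop and the count of new models per year bound how many runs are worth doing.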

    I suppose with particle simulation for physics you likewise run into a theoretical, although very large, maximum number of FLOPs as long as you believe in Planck lengths and Heisenberg's uncertainty principle and stuff like that.

    Hmm what else can you do with high latency FLOPs that isn't a gag or trivial or already mentioned... At this moment I can't think of any use for unconstrained FLOPs and we may very well have found the limit.

    A mechanical analogy: we rapidly packed more horsepower into locomotive engines, but eventually maxed out the technology. Technologically it's no great challenge for specially designed engines and track to handle 20000 HP, look at the TGV and the like, but in practice ye olde freight train has little use for more than 1500 or so HP, and is usually friction limited (so you have 8 engines on that coal train because there's snow and ice in the mountain pass, not because you need to floor all eight to get it moving down in the valley). Hmm. Cars? My entire life, commuter cars have never strayed much above 100 HP despite phenomenal growth rates a century or so ago. How about growth in internet companies in the 90s, doubling in size every couple of months until everyone who's getting internet has internet, and then...

    • (Score: 3, Informative) by mhajicek on Monday July 20 2015, @09:16PM

      by mhajicek (51) Subscriber Badge on Monday July 20 2015, @09:16PM (#211588)

      Traditionally a lot of the money comes from nuclear warhead simulation. That too has a finite appetite for flops.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
    • (Score: 4, Informative) by takyon on Monday July 20 2015, @09:23PM

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Monday July 20 2015, @09:23PM (#211590) Journal

      http://www.ncsa.illinois.edu/ [illinois.edu]
      http://www.livescience.com/6392-9-super-cool-supercomputers.html [livescience.com]
      http://www.information-age.com/industry/hardware/123458374/5-real-life-applications-supercomputers-you-never-knew-about [information-age.com]
      http://www.extremetech.com/extreme/122159-what-can-you-do-with-a-supercomputer/2 [extremetech.com]

      Biotechnology such as protein folding is my favorite application.

      Here's another reason that supercomputing may be slowing down: applications need to adapt to new architectures. Manycore (Intel Xeon Phi) and GPU coprocessing require rewritten code.

      http://www.marketwatch.com/story/chinas-bevy-of-supercomputers-goes-unused-2014-07-15 [marketwatch.com]
      http://www.hpcwire.com/2014/07/17/dd/ [hpcwire.com]

      Lu conceded ground on one point, however – software development – acknowledging that “China is still behind in software, as high-efficiency software development depends on the overall scientific and technological level of the nation.”

      Another critique from MarketWatch’s Laura He goes even further, questioning not just China’s software prowess, but taking aim at the troublingly low utilization rates of its most expensive number-crunchers. The author cites a report from the NewEase Chinese new portal that claims less than 20 percent of China’s supercomputers have been used for scientific research.

      http://www.hpcwire.com/2012/12/12/programming_the_xeon_phi/ [hpcwire.com]

      Farber points out the Phi is essentially an x86 manycore SMP processor and supports the various parallel programming models — OpenMP and MPI, in particular. That means that most applications can get up and running with a recompilation, using Intel’s own developer toolset.

      But according to a previous analysis by Farber, the limited memory capacity on the devices will limit performance for typical OpenMP and MPI applications. According to him, to get performance out of the hardware, you need to make sure you are taking advantage of coprocessor’s many cores and its muscular vector unit. “Massive vector parallelism is the path to realize that high performance,” writes Farber.

      Although there are 60 cores available on the Phi hardware Dr Dobb’s obtained (a pre-production part, apparently), four-way hyperthreading allows for up to 240 threads per chip. During testing it was determined that the application should have at least half of the available threads in use. It is tempting to think that non-vector codes could also benefit from the Xeon Phi, powered by thread parallelism alone, but Farber thinks that such applications will not be performance standouts on this platform.

      Since the Phi is a PCIe device with just a few gigabytes of memory, it's also important to minimize data transfer back and forth between the CPU's main memory and the local store on the coprocessor card. That means doing as little data shuffling as possible and making sure the coprocessor has enough contiguous work to do using local memory. In fact, Farber maintains that a lot of the design effort to boost performance on the Phi will revolve around minimizing data transfers.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by Alfred on Monday July 20 2015, @08:12PM

    by Alfred (4006) on Monday July 20 2015, @08:12PM (#211554) Journal
    Usually I answer "no" to a headline question but to this one I have to answer "money."
    • (Score: 2) by Runaway1956 on Monday July 20 2015, @08:28PM

      by Runaway1956 (2926) Subscriber Badge on Monday July 20 2015, @08:28PM (#211558) Homepage Journal

      You could elaborate just a little bit. "Bang for the buck" would fit better. There's enough money to do it quickly, IF the bigwigs felt the need to do so. But, it's not like mankind's survival depends on another petaflop computer. What we have is "good enough", and the decision makers are allowing some limited investment for tomorrow's needs. Ehhh.

      Besides - until real quantum computers become a reality, there almost certainly is a limit to what can be done.

      Hmmm - maybe it's time for a game of Singularity . . .

      --
      Abortion is the number one killer of children in the United States.
      • (Score: 2) by takyon on Monday July 20 2015, @09:15PM

        by takyon (881) <{takyon} {at} {soylentnews.org}> on Monday July 20 2015, @09:15PM (#211587) Journal

        You are right. They are subject to the realities of chipmaking. 14nm, 10nm, and 7nm chips may boost performance and power efficiency, but they have to wait on the chipmakers. The chipmakers have to wait on ASML Holdings to finish extreme ultraviolet lithography.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by cafebabe on Monday July 20 2015, @08:28PM

    by cafebabe (894) on Monday July 20 2015, @08:28PM (#211557) Journal

    The number of computing clusters is growing but the largest clusters aren't growing particularly fast. This is due to the Chinese adding more nodes to their ludicrously scalable switching fabric when any party gets close to attaining the record.

    --
    1702845791×2
    • (Score: 4, Informative) by takyon on Monday July 20 2015, @09:05PM

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Monday July 20 2015, @09:05PM (#211579) Journal

      It is a myth that supercomputers are built simply to top the LINPACK list. National prestige is a very small part of what they are used for.

      The Chinese would have added nodes sooner [soylentnews.org] if it weren't for the recent and wrongheaded export ban to their supercomputing centers. Now they are going to add Chinese nodes instead. Heck, Intel should sue the feds over this.

      Did we ban exports because we wanted to beat their record? Nope, we did it because Tianhe is used for simulating nuclear weapons and explosions... just as our Department of Energy supercomputers do.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Monday July 20 2015, @11:53PM

        by Anonymous Coward on Monday July 20 2015, @11:53PM (#211664)

        The same story with India. The US banned export of HPC tech to India because India was (still is) chummier with Soviet Ruskitstan. Then India developed their own HPC tech instead of buying American stuff.

      • (Score: 2) by Gravis on Tuesday July 21 2015, @12:31AM

        by Gravis (4596) on Tuesday July 21 2015, @12:31AM (#211683)

        The Chinese would have added nodes sooner if it weren't for the recent and wrongheaded export ban to their supercomputing centers. ... we did it because Tianhe is used for simulating nuclear weapons and explosions... just as our Department of Energy supercomputers do.

        how is trying to slow the advancement of nuclear weaponry wrongheaded? do you think anyone should have more deadly nuclear weapons?

        • (Score: 0) by Anonymous Coward on Tuesday July 21 2015, @12:47AM

          by Anonymous Coward on Tuesday July 21 2015, @12:47AM (#211690)

          That's because you are dumb. Imagine yourself sitting next to North Korea (Pakistan, Israel, etc.)

        • (Score: 3, Informative) by http on Tuesday July 21 2015, @12:49AM

          by http (1920) on Tuesday July 21 2015, @12:49AM (#211691)

          It's not wrong headed, but it also is absolutely not what's intended, or happening. It's a hypocritical excuse, as NRL still operates, and an irrelevant excuse, as China is boosting its domestic manufacture.

          --
          I browse at -1 when I have mod points. It's unsettling.
        • (Score: 2) by TheRaven on Tuesday July 21 2015, @08:35AM

          by TheRaven (270) on Tuesday July 21 2015, @08:35AM (#211841) Journal
          For two reasons. The first is that China has had nuclear weapons since the '60s. Any policy trying to prevent this should include buying big supercomputers for research into building a time machine. Second, because when you're talking about an economy the size of China, all that sanctions do is stimulate local growth. If the Chinese can't buy Intel chips, then they'll produce chips locally - it's not like they have a shortage of chip fabs or processor designers.
          --
          sudo mod me up
        • (Score: 0) by Anonymous Coward on Tuesday July 21 2015, @08:48AM

          by Anonymous Coward on Tuesday July 21 2015, @08:48AM (#211843)

          how is trying to slow the advancement of nuclear weaponry wrongheaded?

          It's not the intent that's wrongheaded, it's the measure. It is wrongheaded in the same way as it is wrongheaded to try to stop a flood by setting the area you want to protect on fire, on the theory that the fire will stop the water.

      • (Score: 2) by cafebabe on Friday July 24 2015, @02:03PM

        by cafebabe (894) on Friday July 24 2015, @02:03PM (#213147) Journal

        It is a myth that supercomputers are built simply to top the LINPACK list.

        As someone who was previously involved with a large renderfarm [soylentnews.org], I confirm that it was upgraded to gigabit Ethernet switches solely to improve benchmarks. When a renderfarm node reads a .rib file and writes a .dpx file every 45 minutes or so, gigabit Ethernet makes minimal difference to performance.

        --
        1702845791×2
    • (Score: 2) by Non Sequor on Monday July 20 2015, @09:40PM

      by Non Sequor (1005) on Monday July 20 2015, @09:40PM (#211601) Journal

      Is there in general an allocation of funding towards flexibility and scalability rather than raw flops?

      I could imagine many organizations getting to the point where they're primarily focused on managing access and maintenance rather than building out the capacity needed for specific projects. Amazon's platform seemed like it was produced as a byproduct of reaching a point like that.

      --
      Write your congressman. Tell him he sucks.
  • (Score: 4, Interesting) by hash14 on Tuesday July 21 2015, @02:15AM

    by hash14 (1102) on Tuesday July 21 2015, @02:15AM (#211712)

    I recall reading that Tianhe in China isn't getting much use because it consumes too much power. Compute time costs so much that researchers are hesitant to use it, so they run their jobs on slower machines, saving money while taking a small hit to run time.

    I don't know much about building supercomputers, but from vague memories of what I read, it's not as difficult to build a super-powerful computer as it is to make one that's economically reasonable (i.e. low-power enough that researchers want to use it).

  • (Score: 2) by MichaelDavidCrawford on Tuesday July 21 2015, @03:27AM

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Tuesday July 21 2015, @03:27AM (#211740) Homepage Journal

    Consider that kids these days are tweeting each other with multicore CPUs on mobile devices with GPUs.

    At one time one required a supercomputer to do anything in the way of scientific computing. Now one can obtain far more capacity for less than a grand by ordering online from Alibaba.

    While some problems, like the weather, always benefit from greater capacity, there are many that don't.

    --
    Yes I Have No Bananas. [gofundme.com]