posted by cmn32480 on Monday July 20 2015, @07:33PM   Printer-friendly
from the does-it-run-windows? dept.

Currently, the world's most powerful supercomputers can ramp up to more than a thousand trillion operations per second, or a petaflop. But computing power is not growing as fast as it has in the past. On Monday, the June 2015 listing of the Top 500 most powerful supercomputers in the world revealed the beginnings of a plateau in performance growth.
...
The development rate began tapering off around 2008. Between 2010 and 2013, aggregate increases ranged between 26 percent and 66 percent. And on this June's list, there was a mere 17 percent increase from last November.
...
Despite the slowdown, many computational scientists expect performance to reach exascale, or more than a billion billion operations per second, by 2020.

Hmm, if they reach exascale computing, will the weatherman finally be able to predict whether it's going to rain this afternoon? Because he sucks at that now.
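For scale: a petaflop is 10^15 operations per second and an exaflop is 10^18. Getting from the June 2015 leader (roughly 34 petaflops on Linpack, a Top500 figure rather than one quoted above) to exascale by 2020 implies close to a doubling every year; a rough Python back-of-the-envelope:

    # Growth rate implied by "exascale by 2020", starting from the June 2015
    # Top500 leader at roughly 33.86 petaflops (Linpack Rmax).
    PETA, EXA = 1e15, 1e18
    start_flops = 33.86 * PETA   # approximate 2015 leader
    target_flops = 1 * EXA       # exascale target
    years = 5                    # 2015 -> 2020

    growth_per_year = (target_flops / start_flops) ** (1 / years)
    print(f"about {growth_per_year:.2f}x per year")   # ~1.97x

Against the 17 percent bump since last November reported above, that is a steep ask.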


Original Submission

 
  • (Score: 3, Interesting) by fritsd on Monday July 20 2015, @09:44PM

    by fritsd (4586) on Monday July 20 2015, @09:44PM (#211604) Journal

    Procurement cycles have lengthened. Why replace your 5 petaflops supercomputer with a 20 petaflops supercomputer when you can wait a couple of years longer and get 100-200 petaflops?

    I think that's how it always works. The scientists ask for MORE MONEY for faster supercomputers; the accountants say: "ok, we'll allocate budget for the next 10 years and then you can buy an upgrade". 10 years later, an upgraded or new supercomputer is bought.

    Because there are a lot of research institutes with supercomputers in the world, and because they don't all buy their supercomputers in the same year, you see an aggregate effect, and that's the "Top 500" list.

    I thought there was an enthusiastic fellow at IBM who said he could simulate cat brains if anyone handed him an exascale supercomputer. I wonder whether we'll continue to see "general purpose" supercomputers, or whether companies like IBM will start to invent CPUs with some number of realistic spiking artificial neurons, and scale that up.

    I don't hear much anymore from the EPFL "Blue Brain" [bluebrain.epfl.ch] project; maybe it's stuck in interpersonal arguments or something.

    Just found a website http://electronicvisions.github.io/hbp-sp9-guidebook/index.html [github.io] referring to the HBP Neuromorphic Computing Platform, and it mentions centers in Heidelberg and Manchester. So... what about Lausanne? Anybody know?

  • (Score: 4, Informative) by takyon on Monday July 20 2015, @10:02PM

    by takyon (881) on Monday July 20 2015, @10:02PM (#211612) Journal

    IBM is throwing around lots of ideas: they've got the 7nm demo, all-optical chips, neuromorphic chips, and quantum chips. Neuromorphic and quantum are potentially narrow-purpose methods of computing.

    "Give me 1 exaflops to simulate a cat brain" doesn't make much sense to me. First, that implies you could potentially simulate any brain with today's resources if you just slowed down the simulation enough (days and months to simulate 1 second)... so maybe the amount of general purpose flops aren't the problem. As "neurons" increase, the amount of synapses increase much more. IBM's TrueNorth chip is a neuromorphic design [ibm.com] that mimics how neurons work.

    They say that a 28nm TrueNorth chip can simulate 1 million neurons and 256 million synapses using 70 milliwatts of power. They created an array of 16, boosting that to 16 million neurons and 4 billion synapses. They have an ambition of integrating 4096 chips on a single rack for 4 billion neurons and 1 trillion synapses, consuming 4 kilowatts of power. That's about 1/25th of a human brain's neurons.

    Shrink the process node and the number of neurons simulated goes up. Stack chips (more feasible thanks to the lower heat) and it goes up again. And it would consume less than 1% of the energy that big supercomputers use.
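    Those scaling claims follow directly from the per-chip numbers above; a quick sketch to check them (the ~17.8 MW draw of a big 2015 supercomputer is my own reference point, not something from the article):

        neurons_per_chip = 1_000_000      # quoted TrueNorth per-chip figures
        synapses_per_chip = 256_000_000

        for name, chips in [("16-chip board", 16), ("4096-chip rack", 4096)]:
            print(f"{name}: {chips * neurons_per_chip / 1e6:.0f}M neurons, "
                  f"{chips * synapses_per_chip / 1e9:.0f}B synapses")

        rack_watts = 4_000             # quoted power for the full 4096-chip rack
        supercomputer_watts = 17.8e6   # roughly Tianhe-2's draw
        print(f"rack power is {100 * rack_watts / supercomputer_watts:.3f}% of that")

    The rack comes out at about 0.02% of the big machine's power draw, comfortably under the 1% figure.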

    • (Score: 2) by TheLink on Tuesday July 21 2015, @07:33AM

      by TheLink (332) on Tuesday July 21 2015, @07:33AM (#211805) Journal
      I'd like to see them first simulate a single white blood cell to practically 100%. Just because you know the structure of a machine and how it moves doesn't always mean you understand how it works and why it does the stuff it does.

      If they can't even do a white blood cell to 100%, I'm going to laugh at talks of simulating a cat brain.

      I'm pretty sure 90% of the time Stephen Hawking doesn't do that much, so we can probably simulate him to 90%. But the magic is somewhere in the 10% we can't simulate yet ;).