
SoylentNews is people

posted by cmn32480 on Monday July 20 2015, @07:33PM   Printer-friendly
from the does-it-run-windows? dept.

Currently, the world's most powerful supercomputers can ramp up to more than a thousand trillion operations per second, or a petaflop. But computing power is not growing as fast as it has in the past. On Monday, the June 2015 listing of the Top 500 most powerful supercomputers in the world revealed the beginnings of a plateau in performance growth.
...
The development rate began tapering off around 2008. Between 2010 and 2013, aggregate increases ranged between 26 percent and 66 percent. And on this June's list, there was a mere 17 percent increase from last November.
...
Despite the slowdown, many computational scientists expect performance to reach exascale, or more than a billion billion operations per second, by 2020.

Hmm, if they reach exascale computing will the weatherman finally be able to predict if it's going to rain this afternoon? Because he sucks at that now.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Informative) by takyon on Monday July 20 2015, @10:02PM

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday July 20 2015, @10:02PM (#211612) Journal

    IBM is throwing around lots of ideas. They've got the 7nm demo, they have all-optical chips, they have neuromorphic chips, they have quantum chips. Neuromorphic and quantum chips are potentially narrow-purpose methods of computing.

    "Give me 1 exaflops to simulate a cat brain" doesn't make much sense to me. First, that implies you could potentially simulate any brain with today's resources if you just slowed down the simulation enough (days and months to simulate 1 second)... so maybe the amount of general purpose flops aren't the problem. As "neurons" increase, the amount of synapses increase much more. IBM's TrueNorth chip is a neuromorphic design [ibm.com] that mimics how neurons work.

    They say that a 28nm TrueNorth chip can simulate 1 million neurons and 256 million synapses using 70 milliwatts of power. They created an array of 16, boosting that to 16 million neurons and 4 billion synapses. They have an ambition of integrating 4096 chips on a single rack for 4 billion neurons and 1 trillion synapses, consuming 4 kilowatts of power. That's about 1/25th of a human brain's neurons.

    Shrink the process node and the number of neurons simulated goes up. Stack chips (more feasible due to the lower heat) and it goes up again. And it would consume less than 1% of the energy that big supercomputers use.
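    The scaling in the comment above is straightforward multiplication, and it checks out with a quick back-of-the-envelope sketch. The per-chip figures are the ones quoted (1 million neurons, 256 million synapses, 70 mW per 28nm TrueNorth chip); the ~100 billion human-brain neuron count is an assumed round figure, and note that naive per-chip power multiplication gives well under the quoted 4 kW rack figure, which presumably includes interconnect and support overhead:

    ```python
    # Back-of-the-envelope scaling of IBM TrueNorth figures quoted above.
    # Per-chip numbers are from the comment; the ~100 billion human-brain
    # neuron total is an assumed round figure, not from the source.

    NEURONS_PER_CHIP = 1_000_000
    SYNAPSES_PER_CHIP = 256_000_000
    POWER_PER_CHIP_W = 0.070  # 70 milliwatts

    def scale(chips):
        """Linear scale-out: neurons, synapses, and chip power all grow with chip count."""
        return (chips * NEURONS_PER_CHIP,
                chips * SYNAPSES_PER_CHIP,
                chips * POWER_PER_CHIP_W)

    # 16-chip array: 16 million neurons, ~4 billion synapses
    neurons, synapses, watts = scale(16)
    print(neurons, synapses, round(watts, 2))   # 16000000 4096000000 1.12

    # 4096-chip rack: ~4 billion neurons, ~1 trillion synapses.
    # Chip power alone is ~287 W; the quoted 4 kW rack figure
    # presumably includes interconnect and support overhead.
    neurons, synapses, watts = scale(4096)
    print(neurons, synapses, round(watts, 1))   # 4096000000 1048576000000 286.7

    # Fraction of an assumed ~100-billion-neuron human brain: ~1/25
    print(round(4096 * NEURONS_PER_CHIP / 100e9, 3))  # 0.041
    ```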

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by TheLink on Tuesday July 21 2015, @07:33AM

    by TheLink (332) on Tuesday July 21 2015, @07:33AM (#211805) Journal
    I'd like to see them simulate a single white blood cell first to practically 100%. Just because you know the structure of a machine and how it moves doesn't always mean you understand how it works and why it does the stuff it does.

    If they can't even do a white blood cell to 100%, I'm going to laugh at talks of simulating a cat brain.

    I'm pretty sure 90% of the time Stephen Hawking doesn't do that much, so we can probably simulate him to 90%. But the magic is somewhere in the 10% we can't simulate yet ;).