Currently, the world's most powerful supercomputers can ramp up to more than a thousand trillion floating-point operations per second, or one petaflops. But computing power is not growing as fast as it has in the past. On Monday, the June 2015 listing of the Top 500 most powerful supercomputers in the world revealed the beginnings of a plateau in performance growth.
...
The development rate began tapering off around 2008. Between 2010 and 2013, aggregate increases ranged between 26 percent and 66 percent. And on this June's list, there was a mere 17 percent increase from last November.
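The TOP500 list is published twice a year, so that 17 percent rise per list compounds to roughly 37 percent per year, well under the 26 to 66 percent annual range quoted above. A quick sketch of the arithmetic (the 17 percent figure is from the summary; the function name is mine):

```python
# Compound a per-list (semiannual) growth rate into an annual rate.
def annualized(per_list_growth, lists_per_year=2):
    return (1 + per_list_growth) ** lists_per_year - 1

# 17% growth between the November 2014 and June 2015 lists
print(round(annualized(0.17), 3))  # 0.369, i.e. ~37% per year
```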
...
Despite the slowdown, many computational scientists expect performance to reach exascale, or more than a billion billion operations per second, by 2020.
Hmm, if they reach exascale computing will the weatherman finally be able to predict if it's going to rain this afternoon? Because he sucks at that now.
(Score: 5, Informative) by takyon on Monday July 20 2015, @07:52PM
Moore's law slowdown and Intel's tick-tock-tock will definitely contribute by delaying the release of 10nm Intel Xeon Phi. On the coprocessor front, GPUs have been stuck on 28nm for years. Procurement cycles have lengthened. Why replace your 5 petaflops supercomputer with a 20 petaflops supercomputer when you can wait a couple of years longer and get 100-200 petaflops?
https://soylentnews.org/article.pl?sid=15/07/13/1118204 [soylentnews.org]
While it's true that it is the entire TOP500 list that is slowing down (such as the speed of the #500 computer on the list), the top position is a red herring. Tianhe-2 was even due to get an upgrade within the next 6 months that has been pushed back [soylentnews.org] to within the next 12 months.
Here's a June 2015 stat: "There are 68 systems with performance greater than 1 petaflop/s on the list, up from 50 last November." That's the number to watch... how many systems deliver at least 1000 teraflops. On the June 2010 list, just 3 systems had an Rmax above 1 petaflops.
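That "number to watch" can be pulled straight from the published list. A minimal sketch, assuming you have each system's Rmax in teraflops (the field names and entries here are illustrative, not the actual TOP500 download format):

```python
# Count systems whose Rmax is at least 1 petaflops (1000 teraflops).
systems = [
    {"name": "Tianhe-2", "rmax_tflops": 33862.7},
    {"name": "Titan", "rmax_tflops": 17590.0},
    {"name": "SmallCluster", "rmax_tflops": 450.0},  # below the cutoff
]

petaflop_systems = [s for s in systems if s["rmax_tflops"] >= 1000.0]
print(len(petaflop_systems))  # 2 of the 3 example entries qualify
```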
None of this is dire news. Less "free lunch" just means you have to work more efficiently with the resources you have. And there are still better chips on the horizon, even below 7nm. Finally, there are a number of newer technologies coming that could disrupt the standard classical CMOS chip, such as quantum [soylentnews.org], neuromorphic [soylentnews.org], and optical [theplatform.net].
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by mhajicek on Monday July 20 2015, @09:11PM
I'm holding out for the tachyon chip that spits out the answer before getting the input.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
(Score: 2) by BsAtHome on Monday July 20 2015, @10:09PM
That'll be 42 then.
Never needing to know the question. It is overrated. (can I get my H2GT2G sticker now?)
(Score: 3, Interesting) by fritsd on Monday July 20 2015, @09:44PM
I think that's how it always works. The scientists ask for MORE MONEY for faster supercomputers; the accountants say: "ok, we'll allocate budget for the next 10 years and then you can buy an upgrade". 10 years later, an upgraded or new supercomputer is bought.
Because there are a lot of research institutes with supercomputers in the world, and because they don't all buy their supercomputers in the same year, you see an aggregate effect, and that's the "Top 500" list.
I thought there was an enthusiastic fellow at IBM who said that he could simulate cat brains if anyone handed him an exascale supercomputer. I wonder if we'll continue to see "general purpose" supercomputers, or if companies like IBM will start to invent CPUs with some number of realistic spiking artificial neurons, and scale that up.
I don't hear much anymore from the EPFL "Blue Brain" [bluebrain.epfl.ch] project; maybe it's stuck in interpersonal arguments or something.
Just found a website http://electronicvisions.github.io/hbp-sp9-guidebook/index.html [github.io] referring to the HBP Neuromorphic Computing Platform, and it mentions centers in Heidelberg and Manchester. So... what about Lausanne? Anybody know?
(Score: 4, Informative) by takyon on Monday July 20 2015, @10:02PM
IBM is throwing around lots of ideas. They've got the 7nm demo, they have all-optical chips, they have neuromorphic chips, they have quantum chips. Neuromorphic and quantum are potentially narrow-purpose methods of computing.
"Give me 1 exaflops to simulate a cat brain" doesn't make much sense to me. First, it implies you could simulate any brain with today's resources if you just slowed the simulation down enough (taking days or months to simulate 1 second)... so maybe the number of general-purpose flops isn't the problem. As the number of "neurons" increases, the number of synapses increases much faster. IBM's TrueNorth chip is a neuromorphic design [ibm.com] that mimics how neurons work.
They say that a 28nm TrueNorth chip can simulate 1 million neurons and 256 million synapses using 70 milliwatts of power. They created an array of 16, boosting that to 16 million neurons and 4 billion synapses. They have an ambition of integrating 4096 chips on a single rack for 4 billion neurons and 1 trillion synapses, consuming 4 kilowatts of power. That's about 1/25th of a human brain's neurons.
Shrink the process node, and the number of neurons simulated goes up. Stack chips (more feasible thanks to the lower heat output) and it goes up again. It would consume less than 1% of the energy that big supercomputers use.
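The scaling arithmetic above is easy to check. A back-of-the-envelope sketch, using IBM's published per-chip figures quoted in the parent and the round 100-billion-neuron estimate for a human brain that the "1/25th" figure implies (newer estimates are closer to 86 billion):

```python
# TrueNorth per-chip figures quoted above
neurons_per_chip = 1_000_000
synapses_per_chip = 256_000_000

chips = 4096  # the planned single-rack configuration
total_neurons = chips * neurons_per_chip    # ~4.1 billion
total_synapses = chips * synapses_per_chip  # ~1.05 trillion

human_brain_neurons = 100_000_000_000  # round estimate; ~86e9 per newer counts
fraction = total_neurons / human_brain_neurons

print(f"{total_neurons:,} neurons, {total_synapses:,} synapses")
print(f"~1/{1 / fraction:.0f} of a human brain's neurons")  # ~1/24
```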
(Score: 2) by TheLink on Tuesday July 21 2015, @07:33AM
If they can't even simulate a white blood cell to 100%, I'm going to laugh at talk of simulating a cat brain.
I'm pretty sure 90% of the time Stephen Hawking doesn't do that much, so we can probably simulate him to 90%. But the magic is somewhere in the 10% we can't simulate yet ;).