
posted by CoolHand on Sunday August 02 2015, @01:20PM   Printer-friendly
from the binary-tree-hugging dept.

The June 2015 edition of the Green500 supercomputer list is finally out, and the top system, Shoubu at the Institute of Physical and Chemical Research (RIKEN) in Japan, has surpassed the 7 gigaflops per watt milestone. The following two systems surpassed 6 GFLOPS/W, and the current #4 system led the November 2014 list at 5.272 GFLOPS/W.

Shoubu is ranked #160 on the June 2015 edition of the TOP500 list, with an RMAX of 412.7 teraflops. Green500 reports its efficiency at 7,031.58 MFLOPS/W with a power consumption of 50.32 kW. The supercomputer uses Intel Xeon E5-2618Lv3 Haswell CPUs, "new many-core accelerators from PEZY-SC," and the InfiniBand data interconnect. The top 32 systems on the new Green500 list are heterogeneous, using GPU and "many-core" accelerators from the likes of AMD, Intel, NVIDIA, and PEZY Computing. The PEZY-SC accelerator used in the top 3 systems reportedly delivers 1.5 teraflops of double-precision floating-point performance using 1024 cores built on a 28nm process, while consuming just 90 W.
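For readers who want to check these figures, GFLOPS-per-watt is just sustained throughput divided by power draw. A quick sketch (the ~353.8 TFLOPS figure below is what Green500's reported efficiency and power imply for its measured workload, which differs from the TOP500 RMAX run):

```python
def gflops_per_watt(gflops, watts):
    """Efficiency in GFLOPS per watt."""
    return gflops / watts

# Shoubu system level: 7,031.58 MFLOPS/W at 50.32 kW implies roughly
# 353.8 sustained TFLOPS under Green500's measured workload.
shoubu_gflops = 7031.58 / 1000 * 50_320  # (MFLOPS/W -> GFLOPS/W) * watts
print(f"Shoubu measured throughput: {shoubu_gflops / 1000:.1f} TFLOPS")

# PEZY-SC accelerator alone: 1.5 TFLOPS double precision at 90 W
print(f"PEZY-SC: {gflops_per_watt(1500, 90):.1f} GFLOPS/W")  # ~16.7
```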

Green500 notes Japanese dominance in the supercomputer efficiency rankings. Aside from Shoubu at RIKEN, the #2 and #3 systems are located at the High Energy Accelerator Research Organization (KEK) in Tsukuba, Ibaraki, Japan. Eight of the top twenty systems on the newest Green500 list are located in Japan.


Original Submission

  • (Score: 3, Informative) by takyon on Sunday August 02 2015, @05:50PM

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday August 02 2015, @05:50PM (#217046) Journal

    The sixty megawatt number was one interviewee's guess at how much power a 1 exaflops supercomputer might consume. As I wrote in my comment [soylentnews.org], it isn't a very realistic target.

    20-25 MW is the actual target. 50-60 MW is worst-case-scenario territory. If some nation makes a 50-60 MW supercomputer just to get to 1 exaflops first, chalk it up to bragging rights, because that's a terribly high power draw.

    To get a 25 megawatt 1 exaflops supercomputer, you need an efficiency of 40 gigaflops per watt. We were at 5.272 GFLOPS/W; now we are at 7.032 GFLOPS/W. The Green500 is turning out to be a much more interesting list than the TOP500. Now we need about a further 6-fold improvement in total system efficiency to meet the exascale targets. These efficient systems can scale, as Piz Daint, now #13 on the Green500, shows.
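The arithmetic behind those targets, as a quick sketch:

```python
EXAFLOPS = 1e18  # FLOPS

def required_efficiency(target_flops, power_watts):
    """GFLOPS/W needed to sustain target_flops within power_watts."""
    return target_flops / 1e9 / power_watts

needed = required_efficiency(EXAFLOPS, 25e6)   # 25 MW power budget
print(f"Needed: {needed:.0f} GFLOPS/W")        # 40 GFLOPS/W

current = 7.032  # Shoubu, June 2015 Green500
print(f"Improvement still required: {needed / current:.1f}x")  # ~5.7x
```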

    About China: they plan to upgrade Tianhe-2 (power consumption currently at 17.8 megawatts) with a "homegrown accelerator" [soylentnews.org]. The upgrade will tack on another megawatt of power consumption but could increase peak performance to 100 petaflops.
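For comparison, those upgrade numbers pencil out to roughly 5.3 GFLOPS/W, though that is peak performance rather than the measured Linpack figure the Green500 ranks on, so it is not directly comparable. A quick sketch:

```python
# Upgraded Tianhe-2: ~100 PFLOPS peak at ~18.8 MW (17.8 MW now + ~1 MW added)
peak_gflops = 100e6   # 100 petaflops expressed in gigaflops
power_watts = 18.8e6  # 18.8 megawatts in watts
print(f"{peak_gflops / power_watts:.2f} GFLOPS/W (peak)")  # ~5.32
```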

    There is also the possibility for complete disruption of performance/power scaling. Here's the company to watch [optalysys.com].

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 3, Informative) by AnonTechie on Sunday August 02 2015, @07:30PM

    by AnonTechie (2275) on Sunday August 02 2015, @07:30PM (#217072) Journal

    Related to this topic: Computing at full capacity: http://phys.org/news/2015-08-full-capacity.html [phys.org]

    Over 12 million servers in 3 million data centers in the U.S. burn about 100 billion kilowatt-hours of electricity every year. Billions of dollars are spent on data center energy every year, with billions more spent on power distribution and cooling infrastructures. Even with the magnitude of these numbers, energy and cooling represent only about 20 percent of the typical total cost of ownership of data centers, which is typically dominated by server hardware (about 40 percent) and software (about 25 percent) costs. Additional costs, including storage, networking, and information technology labor, further swell the price tag.
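A rough back-of-the-envelope on those numbers (the ~$0.07/kWh electricity price below is my assumption, not from the article):

```python
kwh_per_year = 100e9     # ~100 billion kWh/year across US data centers
price_per_kwh = 0.07     # assumed average commercial electricity rate (USD)
energy_cost = kwh_per_year * price_per_kwh
print(f"Electricity alone: ${energy_cost / 1e9:.0f}B/year")  # ~$7B/year
```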

    [Source]: http://newsoffice.mit.edu/2015/jisto-computing-at-full-capacity-0731 [mit.edu]

    --
    Albert Einstein - "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."
  • (Score: 0) by Anonymous Coward on Sunday August 02 2015, @09:19PM

    by Anonymous Coward on Sunday August 02 2015, @09:19PM (#217108)

    omg! thanks a bunch! this video is really cool : ) http://optalysys.com/technology/watch-video/ [optalysys.com]

    • (Score: 2) by takyon on Sunday August 02 2015, @10:14PM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday August 02 2015, @10:14PM (#217128) Journal

      http://www.hpcwire.com/2014/08/06/exascale-breakthrough-weve-waiting/ [hpcwire.com]
      http://www.hpcwire.com/off-the-wire/tgac-and-optalysys-collaborate-to-reduce-hpc-energy-consumption/ [hpcwire.com]
      http://www.theplatform.net/2015/03/25/a-light-approach-to-genomics-with-optical-processors/ [theplatform.net]

      It's early, but it could become a big deal... as long as it's not vaporware. It's also unclear whether this is a coprocessor, whether it can work on all or at least many problems, and whether it could be useful in any way for home/gaming users. If it delivers anything like a 95% power reduction, people will start taking notice.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 1, Informative) by Anonymous Coward on Sunday August 02 2015, @11:47PM

      by Anonymous Coward on Sunday August 02 2015, @11:47PM (#217155)

      The video is 13 months old as of this writing. More interesting, and something nobody brought up here or in the comments on the video itself, is that this appears, at least to me, to be an optical ANALOG computer:

      https://en.wikipedia.org/wiki/Analog_computer [wikipedia.org]

      The giveaway in the video is the explanation of how it works: translating digital data into analog waveforms riding the light, running it through some optics, and then turning it back into digital signals again. There's no quantum-anything, nor does this seem to be a digital photonic computer with optical transistors and the like. The huge difference from previous analog computers, besides the optics, is that the whole thing is on a single die, if I'm understanding right.

      Since the (vast?) majority of supercomputer simulations involve analog phenomena, particularly fluid dynamics, going to analog computers makes sense. I could see this not working down at the quantum level, but that's just a guess from this armchair quarterback.

      I also remember coming across, years ago, a Russian processor manufacturer's web site talking about how they never gave up on analog computers "unlike the West" and how their computers could outperform Western ones, at least at certain things. I never saw the site again, nor saw it mentioned anywhere else, but I find it interesting that what is old is new again under a different name, yet again.