
posted by martyb on Tuesday June 20 2017, @03:34PM   Printer-friendly
from the Is-that-a-Cray-in-your-pocket? dept.

A new list has been published on top500.org. It may be noteworthy that the NSA, Google, Amazon, Microsoft, etc. do not submit information to this list. Currently, the top two places are occupied by China, and the #1 system has roughly 400% more peak performance (Rpeak) and 370% more Rmax than the 3rd-place system (Switzerland). The U.S. appears at rank 4, Japan at rank 7, and Germany is not in the top ten at all.

All operating systems in the top 10 are Linux or Linux derivatives. It seems obvious that, since this is highly optimized hardware, only operating systems that can be fine-tuned are viable (so either open source, or with vendor support for such customizations). Still, I would have thought that, since a lot of effort needs to be invested anyway, other systems (BSD?) could be equally suited to the task.

| Rank | Site | System | Cores | Rmax (TFlop/s) | Rpeak (TFlop/s) | Power (kW) |
|-----:|------|--------|------:|---------------:|----------------:|-----------:|
| 1 | China: National Supercomputing Center in Wuxi | Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway - NRCPC | 10,649,600 | 93,014.6 | 125,435.9 | 15,371 |
| 2 | China: National Super Computer Center in Guangzhou | Tianhe-2 (MilkyWay-2) - TH-IVB-FEP Cluster, Intel Xeon E5-2692 12C 2.200GHz, TH Express-2, Intel Xeon Phi 31S1P - NUDT | 3,120,000 | 33,862.7 | 54,902.4 | 17,808 |
| 3 | Switzerland: Swiss National Supercomputing Centre (CSCS) | Piz Daint - Cray XC50, Xeon E5-2690v3 12C 2.6GHz, Aries interconnect, NVIDIA Tesla P100 - Cray Inc. | 361,760 | 19,590.0 | 25,326.3 | 2,272 |
| 4 | U.S.: DOE/SC/Oak Ridge National Laboratory | Titan - Cray XK7, Opteron 6274 16C 2.200GHz, Cray Gemini interconnect, NVIDIA K20x - Cray Inc. | 560,640 | 17,590.0 | 27,112.5 | 8,209 |
| 5 | U.S.: DOE/NNSA/LLNL | Sequoia - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom - IBM | 1,572,864 | 17,173.2 | 20,132.7 | 7,890 |
| 6 | U.S.: DOE/SC/LBNL/NERSC | Cori - Cray XC40, Intel Xeon Phi 7250 68C 1.4GHz, Aries interconnect - Cray Inc. | 622,336 | 14,014.7 | 27,880.7 | 3,939 |
| 7 | Japan: Joint Center for Advanced High Performance Computing | Oakforest-PACS - PRIMERGY CX1640 M1, Intel Xeon Phi 7250 68C 1.4GHz, Intel Omni-Path - Fujitsu | 556,104 | 13,554.6 | 24,913.5 | 2,719 |
| 8 | Japan: RIKEN Advanced Institute for Computational Science (AICS) | K computer, SPARC64 VIIIfx 2.0GHz, Tofu interconnect - Fujitsu | 705,024 | 10,510.0 | 11,280.4 | 12,660 |
| 9 | U.S.: DOE/SC/Argonne National Laboratory | Mira - BlueGene/Q, Power BQC 16C 1.60GHz, Custom - IBM | 786,432 | 8,586.6 | 10,066.3 | 3,945 |
| 10 | U.S.: DOE/NNSA/LANL/SNL | Trinity - Cray XC40, Xeon E5-2698v3 16C 2.3GHz, Aries interconnect - Cray Inc. | 301,056 | 8,100.9 | 11,078.9 | 4,233 |

takyon: TSUBAME3.0 leads the Green500 list with 14.110 gigaflops per Watt. Piz Daint is #3 on the TOP500 and #6 on the Green500 list, at 10.398 gigaflops per Watt.
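The table columns themselves allow a quick back-of-the-envelope efficiency check. Here is a minimal Python sketch computing gigaflops per watt and the fraction of peak achieved (Rmax/Rpeak) for the top three systems; note that a naive Rmax/power ratio will not exactly match the official Green500 numbers, which use their own power-measurement methodology:

```python
# Efficiency figures derived from the TOP500 table above.
# Caveat: the official Green500 measures power under its own rules,
# so the naive Rmax/power ratio below differs from it (e.g. Piz Daint
# comes out near 8.6 GFlops/W rather than its official 10.398).

systems = [
    # (name, rmax_tflops, rpeak_tflops, power_kw) -- from the table above
    ("Sunway TaihuLight", 93014.6, 125435.9, 15371),
    ("Tianhe-2",          33862.7,  54902.4, 17808),
    ("Piz Daint",         19590.0,  25326.3,  2272),
]

for name, rmax, rpeak, power in systems:
    gflops_per_watt = (rmax * 1000) / (power * 1000)  # TFlops->GFlops, kW->W
    hpl_efficiency = rmax / rpeak                      # fraction of peak achieved
    print(f"{name:18s} {gflops_per_watt:5.2f} GFlops/W  {hpl_efficiency:.0%} of peak")
```

Notice how TaihuLight and Piz Daint sustain roughly three-quarters of peak, while the Xeon Phi-based Tianhe-2 achieves closer to 60%.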

According to TOP500, this is only the second time in the history of the list that the U.S. has not secured one of the top 3 positions.

The #100 and #500 positions on June 2017's list have an Rmax of 1.193 petaflops and 432.2 teraflops respectively. Compare to 1.0733 petaflops and 349.3 teraflops for the November 2016 list.
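As a quick sanity check on those entry bars, the implied growth between the two lists works out to roughly 11% at rank 100 and 24% at rank 500:

```python
# Growth of the TOP500 entry thresholds, from the figures quoted above
# (all values in TFlop/s; 1.0733 and 1.193 petaflops converted to teraflops).
nov_2016 = {"rank_100": 1073.3, "rank_500": 349.3}
jun_2017 = {"rank_100": 1193.0, "rank_500": 432.2}

for rank in ("rank_100", "rank_500"):
    growth = jun_2017[rank] / nov_2016[rank] - 1
    print(f"{rank}: +{growth:.1%}")
```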

[Update: Historical lists can be found on https://www.top500.org/lists/. There was a time when you only needed 0.4 gigaflops to make the original Top500 list — how do today's mobile phones compare? --martyb]


  • (Score: 2) by TheRaven on Wednesday June 21 2017, @08:46AM

    That doesn't matter much, because most of the kernel drivers for this kind of hardware are trivial: they map a bit of device memory into userspace. Some of them don't even have kernel device drivers, they just map something from /dev/kmem. The OS on these things is there as a program loader and a thing that gets out of the way. Any CPU time spent in the OS is time not spent doing useful work. Individual nodes are typically running one single job and the scheduler is often hacked up to run in a purely cooperative mode. On some of the IBM systems, the Linux part is largely there as an I/O coprocessor running on an anaemic PowerPC core, and all of the real compute runs on the accelerator.
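    The "trivial driver" model described here — map a memory region into userspace once, then get out of the way — can be sketched with Python's standard mmap module. This is only an illustration: a temporary file stands in for device memory, since mapping a real PCI BAR or /dev/kmem requires root and actual hardware.

    ```python
    # Sketch of the "map memory into userspace" idea. A temporary file
    # stands in for a device's memory region so this runs anywhere;
    # a real interconnect driver would hand out device memory instead.
    import mmap
    import os
    import tempfile

    PAGE = mmap.PAGESIZE

    # Stand-in for the device's memory region.
    fd, path = tempfile.mkstemp()
    os.ftruncate(fd, PAGE)

    # Map it into our address space. After this point, no kernel driver
    # is involved: the process touches the region with plain loads/stores.
    region = mmap.mmap(fd, PAGE)
    region[0:4] = b"\xde\xad\xbe\xef"   # a "register" write
    val = bytes(region[0:4])            # a "register" read
    print(val.hex())

    region.close()
    os.close(fd)
    os.unlink(path)
    ```

    Once the mapping exists, every access bypasses the OS entirely, which is exactly why the kernel's share of CPU time on these machines can be driven close to zero.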

    IBM used to use a proprietary BSD derivative, but switched to Linux because of the brand recognition. Talking to a friend who runs a few of these at Argonne, they're not even particularly interested in clever OpenMP runtimes for the same reason: the job of the OS and OpenMP runtime is to get out of the way while the carefully optimised code runs. If your OpenMP task scheduler is a bit more clever, you'll still probably lose overall from spending more CPU time in it (this may change with more accelerators, if you can designate a CPU core to running profiling and scheduling tasks and run all of the real work on more throughput-optimised cores).

    --
    sudo mod me up