A new list was published on top500.org. It might be noteworthy that the NSA, Google, Amazon, Microsoft, etc. do not submit information to this list. Currently, the top two places are occupied by China, with a comfortable 400% lead in peak performance and 370% lead in Rmax performance over the 3rd place (Switzerland). The US appears at rank 4, Japan at rank 7, and Germany is not in the top ten at all.
All operating systems in the top 10 are Linux and derivatives. It seems obvious that, since this is highly optimized hardware, only operating systems that can be fine-tuned are viable (so, either open source or with vendor support for such customizations). Still, I would have thought that, since a lot of effort needs to be invested anyway, other systems (BSD?) might be equally suited to the task.
takyon: TSUBAME3.0 leads the Green500 list with 14.110 gigaflops per Watt. Piz Daint is #3 on the TOP500 and #6 on the Green500 list, at 10.398 gigaflops per Watt.
According to TOP500, this is only the second time in the history of the list that the U.S. has not secured one of the top 3 positions.
The #100 and #500 positions on June 2017's list have an Rmax of 1.193 petaflops and 432.2 teraflops respectively. Compare to 1.0733 petaflops and 349.3 teraflops for the November 2016 list.
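To put those cutoffs in perspective, here is a small sketch computing the six-month growth in the entry bar at each rank, using only the Rmax figures quoted above (converted to teraflops):

```python
# Growth in the TOP500 cutoff between the Nov 2016 and Jun 2017 lists.
# Figures are the Rmax values quoted above, expressed in teraflops.
cutoffs = {
    "#100": (1073.3, 1193.0),  # Nov 2016 -> Jun 2017
    "#500": (349.3, 432.2),
}

for rank, (old, new) in cutoffs.items():
    growth = (new / old - 1) * 100
    print(f"{rank}: {growth:.1f}% growth in six months")
```

That works out to roughly 11% growth at the #100 cutoff and about 24% at #500 over the half-year interval.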
[Update: Historical lists can be found on https://www.top500.org/lists/. There was a time when you only needed 0.4 gigaflops to make the original Top500 list — how do today's mobile phones compare? --martyb]
(Score: 2) by takyon on Tuesday June 20 2017, @08:35PM (1 child)
What's your take on the new storage and memory technologies?
Examples:
High Bandwidth Memory
Hybrid Memory Cube
GDDR6/GDDR5X/DDR5
3D QLC NAND
NAND with 64-96 layers
Intel/Micron 3D XPoint (the only significant post-NAND technology to make it to market)
and last but probably least,
helium-filled shingled magnetic recording hard drives (because HAMR is nowhere to be found)
(Score: 1, Informative) by Anonymous Coward on Wednesday June 21 2017, @12:27AM
Memory with a high bandwidth (and a low latency) is becoming more and more critical in HPC. CPU core performance has, for some time now, been increasing faster than memory performance. That is, it's getting harder and harder to keep the cores fed with data. This is often more true for HPC (high-performance computing) workloads than for workloads in many other spaces. Now, there are different technologies which attempt to deliver this bandwidth (don't forget latency), as you point out. I wish I could say more here, but I can't due to NDA concerns. However, perhaps I can paint a very rough picture from publicly available information.

HBM ("High Bandwidth Memory") is good stuff compared to DDR, and HBM2+ is better. There are some concerns here, as the HBM stacks use a very wide, parallel bus and need to be placed very close to the CPU/SoC die. I think we're talking about distances on the order of 1 mm or so. There's only so much room close to a CPU/SoC die to place HBM stacks, given this distance requirement. Don't forget that the HBM stacks and the CPU/SoC die probably have to share a (likely silicon) interposer.

The nice thing about HMC ("Hybrid Memory Cube") is that it uses a serial interface, rather than a parallel one like HBM. Thus, HMC can be placed further away from the CPU/SoC die(s), and this can increase total memory capacity. The issue is that you then pay for this extra capacity in terms of latency, as you have to introduce a SerDes step, etc. Also, one needs to think about power consumption: consider the amount of power, on average, that it takes to move one bit of data to/from memory with HBM vs. HMC; think picojoules per bit.

I can't speak much to the GDDRx or NAND stuff myself. XPoint sure sounds interesting, but there has been a lot of hype there. I wonder how this will turn out in the near to mid term. Again, keep an eye on power consumption there; it will limit the solution space XPoint can compete in.
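The "picojoules per bit" point translates directly into watts once you fix a sustained bandwidth. A minimal sketch, where the pJ/bit figures and the 256 GB/s bandwidth are illustrative assumptions (not vendor specs):

```python
def memory_power_watts(bandwidth_gb_s: float, pj_per_bit: float) -> float:
    """Average power drawn to move data at the given bandwidth.

    bandwidth_gb_s: sustained memory bandwidth in gigabytes per second
    pj_per_bit: energy cost to move one bit, in picojoules
    """
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return bits_per_second * pj_per_bit * 1e-12  # pJ -> J gives watts

# Hypothetical comparison at 256 GB/s sustained bandwidth; the per-bit
# energies below are made-up round numbers for illustration only.
for name, pj in [("HBM-class", 4.0), ("HMC-class (incl. SerDes)", 8.0)]:
    print(f"{name}: {memory_power_watts(256, pj):.1f} W")
```

The takeaway is that even a few pJ/bit difference becomes a meaningful wattage gap at HPC-scale bandwidths, which is why the interface energy (SerDes vs. short parallel traces) matters so much.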
I don't know much about helium-filled drives either, sorry.