Samsung has demonstrated 64 GB RDIMMs using 16 Gb DDR4 memory chips, and plans to make 128 GB and 256 GB modules later this year:
Samsung is demonstrating its 64 GB DDR4 memory module based on 16 Gb chips this week at the OCP U.S. Summit. The 64 GB RDIMM that the company is showcasing is designed for mainstream servers, but the company said the design will ultimately lend itself to building 128 GB and 256 GB memory modules for high-performance servers.
Samsung's monolithic 16 Gb DDR4 DRAM chips are rated for DDR4-2666 at the industry-standard 1.2 V. The chips are produced using an advanced manufacturing technology, though Samsung has not disclosed the details at the moment (it is logical to expect Samsung to use its '10-nm-class' process). What we do know is that the fabrication process and monolithic die give the demonstrated 64 GB RDIMM 20% lower power consumption than a module of the same capacity built from 8 Gb DDR4 chips.
In addition to the new dual-rank 64 GB RDIMM, Samsung is set to develop quad-rank 128 GB RDIMMs and octal-rank 256 GB LRDIMMs. Today's servers running AMD's EPYC or Intel's Xeon Scalable M-suffixed processors feature 12 or 16 memory slots per socket; if every slot could be populated with a 256 GB module, that would yield up to 4 TB per socket (16 × 256 GB). This would be a massive advantage for applications like in-memory databases, virtual desktop infrastructure, and so on.
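The capacity figures above can be sanity-checked with a little arithmetic. A sketch (the chip count ignores extra ECC chips, and the "16 slots, fully populated" configuration is the hypothetical from the paragraph above, not a shipping platform):

```python
# Sanity-check the module-capacity arithmetic from the article.
# Assumptions (illustrative): no ECC chips counted, GB treated as GiB.

CHIP_DENSITY_Gb = 16                        # 16 Gb monolithic DDR4 die
chip_bytes = CHIP_DENSITY_Gb * 2**30 // 8   # 2 GiB of data per chip

# Number of 16 Gb chips needed for a 64 GB module
chips_per_64GB = (64 * 2**30) // chip_bytes
print(chips_per_64GB)                       # → 32

# Per-socket capacity with 16 slots of 256 GB LRDIMMs
per_socket_TB = 256 * 16 / 1024
print(per_socket_TB)                        # → 4.0
```

Doubling chip density halves the chip count per module (or doubles capacity at the same count), which is why 16 Gb dies make 256 GB modules practical where 8 Gb dies did not.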
16 Gb chips may also end up being used in 32 GB memory modules for desktop users.
DDR4 RDIMM and LRDIMM Performance Comparison
Also at Samsung.
Related: Samsung Mass Produces 128 GB DDR4 Server Memory
(Score: 2) by Virindi on Friday March 23 2018, @03:08PM (5 children)
Nice, but I would much rather see more widespread mitigation for rowhammer attacks. The industry seems to be full steam ahead and ignoring what is essentially a design flaw in this type of memory.
Higher density also means more cross-dependency, does it not?
(Score: 1) by cocaine overdose on Friday March 23 2018, @06:23PM (3 children)
(Score: 2) by anotherblackhat on Friday March 23 2018, @06:56PM
Maybe, but ECC doesn't fix row hammer.
The fix for row hammer is to ignore the lies told by DRAM manufacturers about refresh rates, and refresh more often.
(Score: 2) by forkazoo on Friday March 23 2018, @11:47PM (1 child)
ECC was a state-of-the-art technique in the 1980s. The fact that we haven't advanced far past it even in the most insecure of devices is kind of shocking. To a large extent, I blame Intel for using ECC as a major market segmentation feature that they chose to keep very expensive. We could have all sorts of checksumming and verification hardware as a part of our memory right now if history had gone a bit differently.
(Score: 1) by cocaine overdose on Friday March 23 2018, @11:58PM
(Score: 0) by Anonymous Coward on Saturday March 24 2018, @02:31AM
Samsung and Hynix (and others) have implemented the hardware mitigations in all new DDR4 and LPDDR4 modules. Using both TRR and MAC together has been shown to be 100% effective in preventing rowhammer, as long as proper limits are set and the mitigations are actually implemented correctly, which tests have shown Samsung has done in its more popular modules (unlike some manufacturers).
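The idea behind the two mitigations mentioned above can be modeled in a few lines. A toy sketch (illustrative only; real TRR and MAC logic live in DRAM/controller hardware, the internals are proprietary, and the threshold below is made up): MAC bounds how many times a row may be activated per refresh window, and TRR refreshes the physically adjacent "victim" rows when an aggressor row approaches that bound.

```python
# Toy model of MAC + TRR rowhammer mitigation.
# Assumption (illustrative): MAC_THRESHOLD and the neighbor model
# (rows n-1 and n+1) are simplifications, not any vendor's values.

MAC_THRESHOLD = 50_000  # max activations tolerated per refresh window

class TrrModel:
    def __init__(self):
        self.act_counts = {}            # per-row activation counters
        self.refreshed_neighbors = []   # victim rows refreshed by TRR

    def activate(self, row):
        """Count an ACT command; trigger TRR at the MAC limit."""
        n = self.act_counts.get(row, 0) + 1
        self.act_counts[row] = n
        if n == MAC_THRESHOLD:
            # Target Row Refresh: restore charge in the adjacent rows
            # before the aggressor's activations can disturb them.
            self.refreshed_neighbors += [row - 1, row + 1]

    def refresh_window_elapsed(self):
        """Counters reset every refresh interval (e.g. 64 ms)."""
        self.act_counts.clear()

# A hammering pattern on row 7 trips the mitigation:
m = TrrModel()
for _ in range(MAC_THRESHOLD):
    m.activate(7)
print(m.refreshed_neighbors)   # → [6, 8]
```

The effectiveness the comment describes hinges on exactly the caveat it names: the threshold must be low enough, and the counting must actually be implemented, for the victim rows to be refreshed before charge leaks past the error margin.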