
posted by martyb on Friday March 23 2018, @03:02PM   Printer-friendly
from the I-remember-core-memory dept.

Samsung has demonstrated 64 GB RDIMMs using 16 Gb DDR4 memory chips, and plans to make 128 GB and 256 GB modules later this year:

Samsung is demonstrating its 64 GB DDR4 memory module based on 16 Gb chips this week at the OCP U.S. Summit. The 64 GB RDIMM that the company is showcasing is designed for mainstream servers, but ultimately the design will lend itself to building 128 GB and 256 GB memory modules for high-performance servers, the company said.

Samsung's monolithic 16 Gb DDR4 DRAM chips are rated for DDR4-2666 at the industry-standard 1.2 V. The chips are produced using an advanced manufacturing technology, but Samsung is not disclosing details at the moment (it is logical to expect Samsung to use its '10-nm-class' tech, though). The only thing we do know is that the fabrication process and the monolithic die enable 20% lower power consumption for the demonstrated 64 GB RDIMM when compared to a module of the same capacity based on 8 Gb DDR4 chips.

In addition to the new dual-rank 64 GB RDIMM, Samsung is set to develop quad-rank 128 GB RDIMMs and octal-rank 256 GB LRDIMMs. Today's servers running AMD's EPYC or Intel's Xeon Scalable M-suffixed processors feature 12 or 16 memory slots per socket; if every slot could be populated with a 256 GB module, that would yield up to 4 TB per socket. This should be a massive advantage for applications like in-memory databases, virtual desktop infrastructure, and so on.
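The per-socket figure above is simple slot arithmetic; a quick sketch makes the claim concrete (the slot counts and module size come from the article, the helper function is just for illustration):

```python
# Illustrative arithmetic only: per-socket capacity if every DIMM slot
# held one of the planned 256 GB LRDIMMs.
def max_capacity_tb(slots, module_gb=256):
    """Total memory in TB for a socket with `slots` DIMM slots."""
    return slots * module_gb / 1024

print(max_capacity_tb(16))  # 16 slots x 256 GB = 4.0 TB
print(max_capacity_tb(12))  # 12 slots x 256 GB = 3.0 TB
```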

16 Gb chips may also end up being used in 32 GB memory modules for desktop users.

DDR4 RDIMM and LRDIMM Performance Comparison

Also at Samsung.

Related: Samsung Mass Produces 128 GB DDR4 Server Memory


Original Submission

  • (Score: 2) by Virindi on Friday March 23 2018, @03:08PM (5 children)

    by Virindi (3484) on Friday March 23 2018, @03:08PM (#657136)

    Nice, but I would much rather see more widespread mitigation for rowhammer attacks. The industry seems to be full steam ahead and ignoring what is essentially a design flaw in this type of memory.

    Higher density also means more cross-dependency does it not?

    • (Score: 1) by cocaine overdose on Friday March 23 2018, @06:23PM (3 children)

      I believe the consensus among very curmudgeonly sysadmins is that ECC is mandatory if you care at all about your data integrity. You can tighten security by increasing the checksum check frequency. But this is also the price you pay for security. The same price you pay with going RAID or any other hardware utilization that isn't the "norm."
      • (Score: 2) by anotherblackhat on Friday March 23 2018, @06:56PM

        by anotherblackhat (4722) on Friday March 23 2018, @06:56PM (#657218)

        I believe the consensus among, very curmudgeonly, sysadmins is that ECCs are mandatory if you care at all about your data integrity.

        Maybe, but ECC doesn't fix row hammer.
        The fix for row hammer is to ignore the lies told by DRAM manufacturers about refresh rates, and refresh more often.
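        The intuition behind "refresh more often" can be sketched with back-of-the-envelope numbers (a hedged illustration, not the poster's exact method; the timing constants are typical DDR4 figures, and real controllers adjust tREFI, not the window directly):

        ```python
        # Rough sketch: a shorter refresh window shrinks the rowhammer
        # "budget" -- the number of times an aggressor row can be activated
        # before its victims are refreshed. The row-cycle time tRC bounds
        # how fast back-to-back activations can occur.
        TRC_NS = 45  # approximate DDR4 row-cycle time in nanoseconds

        def max_activations(window_ms, trc_ns=TRC_NS):
            """Upper bound on activations of one row within one refresh window."""
            return int(window_ms * 1_000_000 / trc_ns)

        print(max_activations(64))  # standard 64 ms window: ~1.42M activations
        print(max_activations(32))  # doubling the refresh rate halves the budget
        ```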

      • (Score: 2) by forkazoo on Friday March 23 2018, @11:47PM (1 child)

        by forkazoo (2561) on Friday March 23 2018, @11:47PM (#657312)

        ECC was a state of the art technique in the 1980's. The fact that we haven't advanced far past it even in the most insecure of devices is kind of shocking. To a large extent, I blame Intel for using ECC as a major market segmentation feature that they chose to keep very expensive. We could have all sorts of checksumming and verification hardware as a part of our memory right now if history had gone a bit differently.

        • (Score: 1) by cocaine overdose on Friday March 23 2018, @11:58PM

          There was a time when prevention could've been swift and simple; that time is gone now. It's a shame that the only people pushing for better tech are mostly clueless (web developers, startup "full stack engineers," etc.) in the simplicity/lightweight/secure/private department. While the people who are pushing for more secure/simple/lightweight/private tech are hopeless in the usability and feature department. FOSS developers are notorious for putting their licenses above their code quality and UX. Commercial developers are notorious for putting everything under UX.
    • (Score: 0) by Anonymous Coward on Saturday March 24 2018, @02:31AM

      by Anonymous Coward on Saturday March 24 2018, @02:31AM (#657348)

      Samsung and Hynix (and others) have implemented the hardware mitigations in all new DDR4 and LPDDR4 modules. Using both TRR (Target Row Refresh) and MAC (Maximum Activation Count) together has been shown to be 100% effective in preventing rowhammer, as long as proper limits are set and they are actually implemented properly, which tests have shown Samsung has done in their more popular modules (unlike some manufacturers).
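      The MAC idea the AC describes can be sketched in a few lines (a hedged software model only: real DRAM does this in hardware, and the threshold and adjacency model here are illustrative, not vendor-specified):

      ```python
      # Toy model of a MAC (Maximum Activation Count) mitigation: count
      # activations per row and refresh the physically adjacent rows
      # (the potential rowhammer victims) once a row crosses the limit.
      from collections import defaultdict

      class MacTracker:
          def __init__(self, limit):
              self.limit = limit
              self.counts = defaultdict(int)
              self.refreshed = []  # log of victim rows refreshed early

          def activate(self, row):
              self.counts[row] += 1
              if self.counts[row] >= self.limit:
                  # refresh neighbours and reset the aggressor's counter
                  self.refreshed.extend([row - 1, row + 1])
                  self.counts[row] = 0

          def end_refresh_window(self):
              self.counts.clear()  # a normal refresh resets all counters

      tracker = MacTracker(limit=3)  # tiny limit just for the demo
      for _ in range(3):
          tracker.activate(row=42)
      print(tracker.refreshed)  # [41, 43]
      ```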
