SoylentNews is people

posted by Fnord666 on Wednesday July 15 2020, @06:15PM
from the remembering-everything dept.

DDR5 Memory Specification Released: Setting the Stage for DDR5-6400 And Beyond

We'll start with a brief look at capacity and density, as this is the most straightforward change to the standard compared to DDR4. Designed to span several years (if not longer), DDR5 will allow for individual memory chips up to 64Gbit in density, which is 4x higher than DDR4's 16Gbit density maximum. Combined with die stacking, which allows for up to 8 dies to be stacked as a single chip, a 40 element LRDIMM can reach an effective memory capacity of 2TB. Or for the more humble unbuffered DIMM, this means we'll eventually see DIMM capacities reach 128GB for your typical dual rank configuration.
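As a sanity check on those figures, here is the capacity arithmetic in Python. The split between data and ECC packages on the LRDIMM, and the UDIMM chip count, are illustrative assumptions chosen to make the quoted numbers line up, not values taken from the spec text:

```python
# Capacity arithmetic for the figures above. The package split between data
# and ECC, and the UDIMM chip count, are illustrative assumptions.

die_gb = 64 / 8        # 64 Gbit max die density -> 8 GB per die
stack_gb = die_gb * 8  # up to 8 stacked dies    -> 64 GB per package

# 40-package LRDIMM, assuming 1 ECC package per 4 data packages,
# so 32 of the 40 packages hold addressable data:
lrdimm_gb = stack_gb * 40 * 4 // 5
assert lrdimm_gb == 2048   # = 2 TB effective capacity

# Humble unbuffered DIMM: 2 ranks of 8 x8 chips, single (unstacked) dies:
udimm_gb = die_gb * 8 * 2
assert udimm_gb == 128     # GB
```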

[...] For DDR5, JEDEC is looking to start things off much more aggressively than usual for a DDR memory specification. Typically a new standard picks up from where the last one left off, such as with the DDR3 to DDR4 transition, where DDR3 officially stopped at 1.6Gbps and DDR4 started from there. However for DDR5 JEDEC is aiming much higher, with the group expecting to launch at 4.8Gbps, some 50% faster than the official 3.2Gbps max speed of DDR4. And in the years afterwards, the current version of the specification allows for data rates up to 6.4Gbps, doubling the official peak of DDR4.

Of course, sly enthusiasts will note that DDR4 already goes above the official maximum of 3.2Gbps (sometimes well above), and it's likely that DDR5 will eventually go a similar route. The underlying goal, regardless of specific figures, is to double the amount of bandwidth available today from a single DIMM. So don't be too surprised if SK Hynix indeed hits their goal of DDR5-8400 later this decade.
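Converting those pin speeds into per-DIMM bandwidth is straightforward. A sketch, assuming the standard 64-bit (8-byte) data width per DIMM; DDR5 splits that into two independent 32-bit channels, but the total width per DIMM is unchanged:

```python
# Peak per-DIMM bandwidth at the quoted data rates, assuming a 64-bit
# (8-byte) total data width per DIMM.

def dimm_bandwidth_gbs(data_rate_gbps, width_bytes=8):
    """Peak bandwidth in GB/s for a given per-pin data rate in Gbps."""
    return data_rate_gbps * width_bytes

print(dimm_bandwidth_gbs(3.2))  # DDR4 official max: 25.6 GB/s
print(dimm_bandwidth_gbs(4.8))  # DDR5 launch speed: 38.4 GB/s
print(dimm_bandwidth_gbs(6.4))  # DDR5 spec ceiling: 51.2 GB/s (2x DDR4)
print(dimm_bandwidth_gbs(8.4))  # SK Hynix's DDR5-8400 goal: 67.2 GB/s
```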

[...] JEDEC is also using the introduction of the DDR5 memory standard to make a fairly important change to how voltage regulation works for DIMMs. In short, voltage regulation is being moved from the motherboard to the individual DIMM, leaving DIMMs responsible for their own voltage regulation needs. This means that DIMMs will now include an integrated voltage regulator, and this goes for everything from UDIMMs to LRDIMMs.

JEDEC is dubbing this "pay as you go" voltage regulation, and is aiming to improve/simplify a few different aspects of DDR5 with it. The most significant change is that by moving voltage regulation on to the DIMMs themselves, voltage regulation is no longer the responsibility of the motherboard. Motherboards in turn will no longer need to be built for the worst-case scenario – such as driving 16 massive LRDIMMs – simplifying motherboard design and reining in costs to a degree. Of course, the flip side of this argument is that it moves those costs over to the DIMM itself, but then system builders are at least only having to buy as much voltage regulation hardware as they have DIMMs, and hence the PAYGO philosophy.

"On-die ECC" is mentioned in the press release and slides. If you can figure out what that means, let us know.

See also: Micron Drives DDR5 Memory Adoption with Technology Enablement Program

Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated
Cadence and Micron Plan Production of 16 Gb DDR5 Chips in 2019
SK Hynix Announces Plans for DDR5-8400 Memory, and More

Original Submission

  • (Score: 3, Informative) by takyon on Wednesday July 15 2020, @06:17PM (2 children)

    by takyon (881) on Wednesday July 15 2020, @06:17PM (#1022029) Journal

    Comments by TFA author about "on-die ECC":

    So on-die ECC is a bit of a mixed-blessing. To answer the big question in the gallery, on-die ECC is not a replacement for DIMM-wide ECC.

    On-die ECC exists to improve the reliability of individual chips. Between the number of bits per chip getting quite high and newer nodes getting successively harder to develop, the odds of a single-bit error are getting uncomfortably high. So on-die ECC is meant to counter that by transparently dealing with single-bit errors.

    It's similar in concept to error correction on SSDs (NAND): the error rate is high enough that a modern TLC SSD would be unusable without it. If your chips had to be perfect, these ultra-fine processes would never yield well enough to be usable.
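The single-bit correction described here can be illustrated with the classic Hamming(7,4) code. This is only a toy: the actual on-die code in DDR5 chips operates over much wider data words, but the principle is the same (recompute parity, and a nonzero syndrome points at the flipped bit):

```python
# Toy single-error-correcting Hamming(7,4) code, illustrating the kind of
# transparent single-bit correction that on-die ECC performs.

def encode(d):
    """4 data bits -> 7-bit codeword with parity bits at positions 1, 2, 4."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):
    """Fix up to one flipped bit in place; the syndrome encodes its position."""
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1
         | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)
    if s:
        c[s - 1] ^= 1
    return c

word = encode([1, 0, 1, 1])
word[4] ^= 1                       # a single-bit cell error
assert correct(word) == encode([1, 0, 1, 1])
```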

    Consequently, DIMM-wide ECC will still be a thing, which is why the JEDEC diagram shows an LRDIMM with 20 memory packages. That's 10 chips (2 ranks) per channel, with 5 chips per rank. The 5th chip provides ECC. Since the channel is narrower, you now need an extra memory chip for every 4 chips rather than every 8 as with DDR4.
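The chip counts in that layout can be sanity-checked with a little arithmetic. The x8 chip width is my assumption; the comment only gives the counts:

```python
# Chip-count arithmetic for DIMM-wide ECC, assuming x8-wide chips on a
# 32-bit DDR5 channel (vs. DDR4's 64-bit channel).

DDR4_DATA_CHIPS = 64 // 8   # 8 data chips per rank, plus 1 ECC chip
DDR5_DATA_CHIPS = 32 // 8   # 4 data chips per rank, plus 1 ECC chip

print(1 / DDR4_DATA_CHIPS)  # DDR4 ECC overhead: 0.125 (1 extra chip per 8)
print(1 / DDR5_DATA_CHIPS)  # DDR5 ECC overhead: 0.25  (1 extra chip per 4)

# The 20-package ECC LRDIMM in the JEDEC diagram:
channels, ranks, chips_per_rank = 2, 2, 4 + 1
assert channels * ranks * chips_per_rank == 20
```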


    And to quote SK Hynix:

    "On-die error correction code (ECC) and error check and scrub (ECS), which were first to be adopted in DDR5, also allow for more reliable technology node scaling by correcting single bit errors internally. Therefore, it is expected to contribute to further cost reduction in the future. ECS records the DRAM defects and provides the error counts to the host, thereby increasing transparency and enhancing the reliability, availability, and serviceability (RAS) function of the server system."

  • (Score: 2) by Rich on Thursday July 16 2020, @07:10AM (1 child)

    by Rich (945) on Thursday July 16 2020, @07:10AM (#1022321) Journal

    As I understand it, the "rowhammer" class of attacks is still a threat on present-day RAM, so some sort of error correction is due anyway. Any insights on this with the DDR5 proposals?

    • (Score: 2) by takyon on Thursday July 16 2020, @09:21AM

      by takyon (881) on Thursday July 16 2020, @09:21AM (#1022340) Journal

      It isn't clear yet.

      From 2018:,38144.html []

      Samsung is one of the manufacturers that have implemented TRR in their LPDDR4 and DDR4 RAM modules. JEDEC, the standards body developing the DDR specifications, has not yet made TRR part of the DDR specification (it doesn’t seem to be part of DDR5 either), but the specification offers optional hardware support for TRR. The VUSec researchers also believe that TRR coupled with ECC would make it significantly more difficult for attackers to launch Rowhammer attacks against computer systems.

      If it's discussed in the newly released spec, well, they are charging $369 for access to that information.

      The same exact user is whining about a fictional Rowhammer3 on AnandTech and Phoronix comments. But it's not a bad bet to assume that more vulnerabilities will be found.
