posted by Fnord666 on Wednesday July 15 2020, @06:15PM   Printer-friendly
from the remembering-everything dept.

DDR5 Memory Specification Released: Setting the Stage for DDR5-6400 And Beyond

We'll start with a brief look at capacity and density, as this is the most-straightforward change to the standard compared to DDR4. Designed to span several years (if not longer), DDR5 will allow for individual memory chips up to 64Gbit in density, which is 4x higher than DDR4's 16Gbit density maximum. Combined with die stacking, which allows for up to 8 dies to be stacked as a single chip, then a 40 element LRDIMM can reach an effective memory capacity of 2TB. Or for the more humble unbuffered DIMM, this would mean we'll eventually see DIMM capacities reach 128GB for your typical dual rank configuration.
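As a back-of-the-envelope check on those capacity figures, the arithmetic below sketches how they fall out. The 4-data-plus-1-ECC package split per rank follows from the 40-bit ECC channels in the JEDEC material; treat the layout details as an illustration rather than the spec itself.

```python
# Back-of-the-envelope check of the DDR5 capacity figures (sizes in GB).
die_gbit = 64                    # max DDR5 die density, 4x DDR4's 16 Gbit
package_gb = die_gbit * 8 // 8   # up to 8 stacked dies -> 64 GB per package

# 40-element LRDIMM: with 40-bit ECC channels, 4 of every 5 packages
# hold data (the 5th provides ECC), so effective capacity is:
lrdimm_gb = 40 * package_gb * 4 // 5
assert lrdimm_gb == 2048         # i.e. the 2 TB quoted above

# Unbuffered non-ECC DIMM: 2 ranks of 8 monolithic 64 Gbit chips.
udimm_gb = 2 * 8 * (die_gbit // 8)
assert udimm_gb == 128           # the 128 GB dual-rank figure
```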

[...] For DDR5, JEDEC is looking to start things off much more aggressively than usual for a DDR memory specification. Typically a new standard picks up from where the last one left off, such as with the DDR3 to DDR4 transition, where DDR3 officially stopped at 1.6Gbps and DDR4 started from there. However, for DDR5 JEDEC is aiming much higher, with the group expecting to launch at 4.8Gbps, some 50% faster than the official 3.2Gbps max speed of DDR4. And in the years afterwards, the current version of the specification allows for data rates up to 6.4Gbps, doubling the official peak of DDR4.

Of course, sly enthusiasts will note that DDR4 already goes above the official maximum of 3.2Gbps (sometimes well above), and it's likely that DDR5 will eventually go a similar route. The underlying goal, regardless of specific figures, is to double the amount of bandwidth available today from a single DIMM. So don't be too surprised if SK Hynix indeed hits their goal of DDR5-8400 later this decade.
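For concreteness, the per-DIMM numbers work out as follows. This is a sketch assuming the standard 64-bit data path per non-ECC module; DDR5 splits that into two independent 32-bit channels, but the total width per DIMM is unchanged.

```python
# Peak DIMM bandwidth: transfers per second times bus width in bytes.
def dimm_gb_s(mt_per_s, bus_bits=64):
    return mt_per_s * (bus_bits // 8) / 1000  # MT/s -> GB/s (decimal)

ddr4_peak = dimm_gb_s(3200)  # DDR4's official ceiling
ddr5_peak = dimm_gb_s(6400)  # DDR5's current spec ceiling
assert ddr4_peak == 25.6
assert ddr5_peak == 51.2     # double the bandwidth from a single DIMM
```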

[...] JEDEC is also using the introduction of the DDR5 memory standard to make a fairly important change to how voltage regulation works for DIMMs. In short, voltage regulation is being moved from the motherboard to the individual DIMM, leaving DIMMs responsible for their own voltage regulation needs. This means that DIMMs will now include an integrated voltage regulator, and this goes for everything from UDIMMs to LRDIMMs.

JEDEC is dubbing this "pay as you go" voltage regulation, and is aiming to improve/simplify a few different aspects of DDR5 with it. The most significant change is that by moving voltage regulation on to the DIMMs themselves, voltage regulation is no longer the responsibility of the motherboard. Motherboards in turn will no longer need to be built for the worst-case scenario – such as driving 16 massive LRDIMMs – simplifying motherboard design and reining in costs to a degree. Of course, the flip side of this argument is that it moves those costs over to the DIMM itself, but then system builders are at least only having to buy as much voltage regulation hardware as they have DIMMs, and hence the PAYGO philosophy.

"On-die ECC" is mentioned in the press release and slides. If you can figure out what that means, let us know.

See also: Micron Drives DDR5 Memory Adoption with Technology Enablement Program

Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated
Cadence and Micron Plan Production of 16 Gb DDR5 Chips in 2019
SK Hynix Announces Plans for DDR5-8400 Memory, and More


Original Submission

Related Stories

DDR5 Standard to be Finalized by JEDEC in 2018 13 comments

JEDEC has announced that it expects to finalize the DDR5 standard by next year. It says that DDR5 will double bandwidth and density, and increase power efficiency, presumably by lowering the operating voltages again (perhaps to 1.1 V). Availability of DDR5 modules is expected by 2020:

You may have just upgraded your computer to use DDR4 recently or you may still be using DDR3, but in either case, nothing stays new forever. JEDEC, the organization in charge of defining new standards for computer memory, says that it will be demoing the next-generation DDR5 standard in June of this year and finalizing the standard sometime in 2018. DDR5 promises double the memory bandwidth and density of DDR4, and JEDEC says it will also be more power-efficient, though the organization didn't release any specific numbers or targets.

The DDR4 SDRAM specification was finalized in 2012, and DDR3 in 2007, so DDR5's arrival is to be expected (cue the Soylentils still using DDR2). One way to double the memory bandwidth of DDR5 is to double the DRAM prefetch to 16n, matching GDDR5X.

Graphics cards are beginning to ship with GDDR5X. Some graphics cards and Knights Landing Xeon Phi chips include High Bandwidth Memory (HBM). A third generation of HBM will offer increased memory bandwidth, density, and more than 8 dies in a stack. Samsung has also talked about a cheaper version of HBM for consumers with a lower total bandwidth. SPARC64 XIfx chips include Hybrid Memory Cube. GDDR6 SDRAM could raise per-pin bandwidth to 14 Gbps, from the 10-14 Gbps of GDDR5X, while lowering power consumption.


Original Submission

DDR5-4400 Test Chip Demonstrated 16 comments

Cadence and Micron Demo DDR5-4400 IMC and Memory, Due in 2019

Cadence this week introduced the industry's first IP interface in silicon for the current provisional DDR5 specification developed by JEDEC. Cadence's IP and test chip are fabricated using TSMC's 7 nm process technology, and are designed to enable SoC developers to begin work on their DDR5 memory subsystems now and get them to market in 2019-2020, depending on high-volume DDR5 availability. At a special event, Cadence teamed up with Micron to demonstrate their DDR5 DRAM subsystem. In the meantime, Micron has started to sample its preliminary DDR5 chips to interested parties.

Cadence's DDR5 memory controller and PHY achieve a 4400 MT/s data rate with CL42 using Micron's prototype 8 Gb DDR5 memory chips. Compared to DDR4 today, the supply voltage of DDR5 is dropped from 1.2 volts to 1.1 volts, with an allowable fluctuation range of only ±0.033 V. In this case, the specifications mean that an 8 Gb DDR5 DRAM chip can hit a considerably higher I/O speed than an 8 Gb commercial DDR4 IC today at a ~9% lower voltage. JEDEC plans that eventually the DDR5 interface will get to 6400 MT/s, but Cadence says that initial DDR5 memory ICs will support ~4400 MT/s data rates. This will be akin to DDR4 rising from DDR4-2133 at initial launch to DDR4-3200 today. Cadence's DDR5 demo video can be watched here.
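The voltage arithmetic in that paragraph checks out; here is a quick sketch of the figures quoted above.

```python
# Check the DDR5 voltage figures quoted above.
v_ddr4, v_ddr5, tolerance = 1.2, 1.1, 0.033

drop_pct = (v_ddr4 - v_ddr5) / v_ddr4 * 100   # supply-voltage reduction
band_pct = tolerance / v_ddr5 * 100           # allowed fluctuation range

assert round(drop_pct, 1) == 8.3   # i.e. the "~9% lower voltage" claim
assert round(band_pct, 1) == 3.0   # +/-0.033 V is a 3% tolerance band
```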

Cadence and Micron Plan Production of 16 Gb DDR5 Chips in 2019 8 comments

Cadence & Micron DDR5 Update: 16 Gb Chips on Track for 2019

Earlier this year Cadence and Micron performed the industry's first public demonstration of next-generation DDR5 memory. At a TSMC event earlier this month the two companies provided some updates concerning development of the new memory technology. As it appears, the spec has not been finalized at JEDEC yet, but Micron still expects to start production of DDR5 memory chips in late 2019.

As noted back in May, the primary feature of DDR5 SDRAM is chip capacity, not just higher performance and lower power consumption. DDR5 is expected to bring in I/O speeds of 4266 to 6400 MT/s, with a supply voltage drop to 1.1 V and an allowable fluctuation range of 3% (i.e., ±0.033 V). It is also expected to use two independent 32/40-bit channels per module (without or with ECC). Furthermore, DDR5 will have improved command bus efficiency (because the channels will have their own 7-bit Address (Add)/Command (Cmd) buses), better refresh schemes, and an increased number of bank groups for additional performance. In fact, Cadence goes as far as saying that the improved functionality of DDR5 will enable a 36% higher real-world bandwidth when compared to DDR4 even at 3200 MT/s (this claim will have to be put to a test), and once 4800 MT/s speed kicks in, the actual bandwidth will be 87% higher when compared to DDR4-3200. In the meantime, one of the most important features of DDR5 will be monolithic chip density beyond 16 Gb.
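One way to read Cadence's 87% figure: the raw data-rate gain of DDR5-4800 over DDR4-3200 is only 50%, so the remainder would have to come from the bus-efficiency improvements described above. The split below is our inference from the quoted numbers, not Cadence's own breakdown.

```python
# Split Cadence's claimed DDR5-4800 gain into raw speed vs. efficiency.
raw_gain = 4800 / 3200 - 1                 # 50% more raw transfers/s
claimed_gain = 0.87                        # Cadence's real-world claim
efficiency = (1 + claimed_gain) / (1 + raw_gain)
assert round(raw_gain, 2) == 0.5
assert round(efficiency, 2) == 1.25        # ~25% attributed to bus efficiency
```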

Leading DRAM makers already have monolithic DDR4 chips featuring a 16 Gb capacity, but those devices cannot offer extreme clocks or I/O speeds because of laws of physics. Therefore, companies like Micron have a lot of work to do in a bid to bring together high DRAM densities and performance in the DDR5 era. In particular, Micron is concerned about variable retention time, and other atomic level occurrences, once production technologies used for DRAM reach 10 – 12 nm. Meanwhile, the DDR5 Add/Cmd bus already features on-die termination to make signals cleaner and to improve stability at high data rates. Furthermore, high-end DDR5 DIMMs will have their own voltage regulators and PMICs. Long story short, while the DDR5 standard is tailored to wed performance and densities, there is still a lot of magic to be done by DRAM manufacturers.

Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated


Original Submission

SK Hynix Announces Plans for DDR5-8400 Memory, and More 6 comments

SK Hynix: Up to DDR5-8400 at 1.1 Volts

Back in November last year, we reported that SK Hynix had developed and deployed its first DDR5 DRAM. Fast forward to the present, and we also know SK Hynix has recently been working on its DDR5-6400 DRAM, but today the company has showcased that it has plans to offer up to DDR5-8400, with on-die ECC, and an operating voltage of just 1.1 Volts.

With CPU core counts rising amid the fierce battle between Intel and AMD in the desktop, professional, and now mobile markets, the demand to increase throughput performance is high on the agenda. Memory bandwidth by comparison has not been increasing as much, and at some level the beast needs to be fed. Announcing more technical details on its official website, SK Hynix has been working diligently on perfecting its DDR5 chips with capacities of up to 64 Gb per chip.

Micron will begin selling High Bandwidth Memory (HBM) this year, entering the market alongside Samsung and SK Hynix and potentially lowering prices:

Bundled in their latest earnings call, Micron has revealed that later this year the company will finally introduce its first HBM DRAM for bandwidth-hungry applications. The move will enable the company to address the market for high-bandwidth devices such as flagship GPUs and network processors, which in the last five years have turned to HBM to meet their ever-growing bandwidth needs. And as the third and final of the "big three" memory manufacturers to enter the HBM market, this means that HBM2 memory will finally be available from all three companies, introducing a new wrinkle of competition into that market.

Also at Wccftech.

See also: Cadence DDR5 Update: Launching at 4800 MT/s, Over 12 DDR5 SoCs in Development


Original Submission

SK Hynix Ready to Ship 16 Gb DDR5 Dies, Has Its Own 64 GB DDR5-4800 Modules 7 comments

DDR5 is Coming: First 64GB DDR5-4800 Modules from SK Hynix

DDR5 is the next stage of platform memory for use in the majority of major compute platforms. The specification (as released in July 2020) brings the main voltage down from 1.2 V to 1.1 V, increases the maximum silicon die density by a factor of 4, doubles the maximum data rate, doubles the burst length, and doubles the number of bank groups. Simply put, the JEDEC DDR5 specification allows for a 128 GB unbuffered module running at DDR5-6400. RDIMMs and LRDIMMs should be able to go much higher, power permitting.

[...] SK Hynix's announcement today is that they are ready to start shipping DDR5 ECC memory to module manufacturers – specifically 16 gigabit dies built on its 1Ynm process that support DDR5-4800 to DDR5-5600 at 1.1 volts. With the right packaging technology (such as 3D TSV), SK Hynix says that partners can build 256 GB LRDIMMs. Additional binning of the chips for better-than-JEDEC speeds will have to be done by the module manufacturers themselves. SK Hynix also appears to have its own modules, specifically 32GB and 64GB RDIMMs at DDR5-4800, and has previously promised to offer memory up to DDR5-8400.

[...] As part of the announcement, it was interesting to see Intel as one of the lead partners for these modules. Intel has committed to enabling DDR5 on its Sapphire Rapids Xeon processor platform, due for initial launch in late 2021/2022. AMD was not mentioned with the announcement, and neither were any Arm partners.

SK Hynix quotes that DDR5 is expected to be 10% of the global market in 2021, increasing to 43% in 2024. The intersection point for consumer platforms is somewhat blurred at this point, as we're probably only half-way through (or less than half) of the DDR4 cycle. Traditionally we expect a cost intersection between old and new technology when they are equal in market share; however, the additional costs in voltage regulation that DDR5 requires are likely to drive up module costs – scaling from standard power delivery on JEDEC modules up to a beefier solution on overclocked modules. It should, however, make motherboards cheaper in that regard.

See also: Insights into DDR5 Sub-timings and Latencies

Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated
Cadence and Micron Plan Production of 16 Gb DDR5 Chips in 2019
SK Hynix Announces Plans for DDR5-8400 Memory, and More
JEDEC Releases DDR5 Memory Specification


Original Submission

Samsung's 512GB DDR5 Module is a Showcase for the Future of RAM 38 comments

Samsung's 512GB DDR5 module is a showcase for the future of RAM:

Samsung has unveiled a new RAM module that shows the potential of DDR5 memory in terms of speed and capacity. The 512GB DDR5 module is the first to use High-K Metal Gate (HKMG) tech, delivering 7,200 Mbps speeds — over double that of DDR4, Samsung said. Right now, it's aimed at data-hungry supercomputing, AI and machine learning functions, but DDR5 will eventually find its way to regular PCs, boosting gaming and other applications.

[...] With 7,200 Mbps speeds, Samsung's latest module would deliver around 57.6 GB/s transfer speeds on a single channel. In Samsung's press release, Intel noted that the memory would be compatible with its next-gen "Sapphire Rapids" Xeon Scalable processors. That architecture will use an eight-channel DDR5 memory controller, so we could see multi-terabyte memory configurations with memory transfer speeds as high as 460 GB/s. Meanwhile, the first consumer PCs could arrive in 2022 when AMD unveils its Zen 4 platform, which is rumored to support DDR5.
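Those throughput figures follow directly from the data rate; here is a quick sketch assuming a 64-bit channel.

```python
# Samsung's DDR5-7200 numbers: data rate times bus width in bytes.
def channel_gb_s(mt_per_s, bus_bits=64):
    return mt_per_s * (bus_bits // 8) / 1000  # MT/s -> GB/s (decimal)

one_channel = channel_gb_s(7200)
assert one_channel == 57.6                 # the single-channel figure
assert round(8 * one_channel, 1) == 460.8  # the eight-channel figure
```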

Previously:
SK Hynix Ready to Ship 16 Gb DDR5 Dies, Has Its Own 64 GB DDR5-4800 Modules
JEDEC Releases DDR5 Memory Specification
SK Hynix Announces Plans for DDR5-8400 Memory, and More


Original Submission

  • (Score: 3, Informative) by takyon on Wednesday July 15 2020, @06:17PM (2 children)

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday July 15 2020, @06:17PM (#1022029) Journal

    Comments by TFA author about "on-die ECC":


    So on-die ECC is a bit of a mixed-blessing. To answer the big question in the gallery, on-die ECC is not a replacement for DIMM-wide ECC.

    On-die ECC is to improve the reliability of individual chips. Between the number of bits per chip getting quite high, and newer nodes getting successively harder to develop, the odds of a single-bit error is getting uncomfortably high. So on-die ECC is meant to counter that, by transparently dealing with single-bit errors.

    It's similar in concept to error correction on SSDs (NAND): the error rate is high enough that a modern TLC SSD would be unusable without error correction. Otherwise, if your chips had to be perfect, these ultra-fine processes would never yield well enough to be usable.

    Consequently, DIMM-wide ECC will still be a thing. Which is why in the JEDEC diagram it shows an LRDIMM with 20 memory packages. That's 10 chips (2 ranks) per channel, with 5 chips per rank. The 5th chip is to provide ECC. Since the channel is narrower, you now need an extra memory chip for every 4 chips rather than every 8 like DDR4.

    ---

    And to quote SK Hynix

    "On-die error correction code (ECC)3 and error check and scrub (ECS), which were first to be adopted in DDR5, also allow for more reliable technology node scaling by correcting single bit errors internally. Therefore, it is expected to contribute to further cost reduction in the future. ECS records the DRAM defects and provides the error counts to the host, thereby increasing transparency and enhancing the reliability, availability, and serviceability (RAS) function of the server system."

    https://news.skhynix.com/why-ddr5-is-the-industrys-powerful-next-gen-memory/ [skhynix.com]

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by Rich on Thursday July 16 2020, @07:10AM (1 child)

      by Rich (945) on Thursday July 16 2020, @07:10AM (#1022321) Journal

      As I understand, the "rowhammer" class of attacks still is a threat on present-day RAM, so some sort of error correction is due anyway. Any insights on this with the DDR5 proposals?

      • (Score: 2) by takyon on Thursday July 16 2020, @09:21AM

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday July 16 2020, @09:21AM (#1022340) Journal

        It isn't clear yet.

        From 2018:

        https://www.tomshardware.com/news/ecc-memory-not-safe-rowhammer,38144.html [tomshardware.com]

        Samsung is one of the manufacturers that have implemented TRR [synopsys.com] in their LPDDR4 and DDR4 RAM modules. JEDEC, the standards body developing the DDR specifications, has not yet made TRR part of the DDR specification (it doesn’t seem to be part of DDR5 [tomshardware.com] either), but the specification offers optional hardware support for TRR. The VUSec researchers also believe that TRR coupled with ECC would make it significantly more difficult for attackers to launch Rowhammer attacks against computer systems.

        If it's discussed in the newly released spec, well, they are charging $369 for access to that information.

        The same exact user is whining about a fictional Rowhammer3 on AnandTech and Phoronix comments. But it's not a bad bet to assume that more vulnerabilities will be found.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 4, Funny) by DannyB on Wednesday July 15 2020, @06:59PM (2 children)

    by DannyB (5839) Subscriber Badge on Wednesday July 15 2020, @06:59PM (#1022049) Journal

    we'll eventually see DIMM capacities reach 128GB for your typical dual rank configuration.

    Just in the nick of time. This September, Java 15 raises the maximum heap size from 4 TB to 16 TB of memory.

    How do you even build a board with that much memory?

    This will create new and interesting challenges for us developers to use up all that memory.

    (but still 1 ms max GC pause time, if you have plenty of cpu cores)

    --
    Since nobody defrags SSDs anymore, they are more (or less?) prone to failure of their seek mechanisms.
    • (Score: 2) by takyon on Wednesday July 15 2020, @07:13PM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday July 15 2020, @07:13PM (#1022053) Journal

      16x 1 TB DDR5 RDIMMs should do the trick, once they exist.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Wednesday July 15 2020, @08:20PM

      by Anonymous Coward on Wednesday July 15 2020, @08:20PM (#1022083)

      Still less than Emacs though, so they're good to go.

  • (Score: 1, Funny) by Anonymous Coward on Wednesday July 15 2020, @09:39PM (1 child)

    by Anonymous Coward on Wednesday July 15 2020, @09:39PM (#1022113)

    btw, whatever happened to rambus cocksuckers?

  • (Score: 3, Insightful) by shortscreen on Thursday July 16 2020, @12:37AM (2 children)

    by shortscreen (2252) on Thursday July 16 2020, @12:37AM (#1022170) Journal

    Overclockers will be sad that they can't crank up the memory voltage via software. That is, unless they get the inevitable special overclocker DIMMs with blue LEDs that also have voltage tweaking as an additional feature.

    Shady vendors can put slow chips on a DIMM and raise the voltage to pass it off as a higher speed grade.

    • (Score: 2) by takyon on Thursday July 16 2020, @02:21AM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday July 16 2020, @02:21AM (#1022215) Journal

      There are already plans for DDR5-8400, which is way outside the spec. Clearly some sort of overclocking will be possible.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Friday July 17 2020, @03:48PM

      by Anonymous Coward on Friday July 17 2020, @03:48PM (#1022933)

      Shady vendors can put slow chips on a DIMM and raise the voltage to pass it off as a higher speed grade.

      This is nothing new. So, you got cheap DDR4 at 2933 or 3200 on the desktop while laptops get cheap 2400 or maybe 2666.
      The desktop is "cheating" by running at 1.35V instead of 1.2V but it's faster, everyone knows it, it's warrantied and it costs about the same.
      You go buy cheap slower DDR 2400 for the desktop if you want to.
