from the keeping-up-with-the-Joneses^W^W-Samsung dept.
Cadence & Micron DDR5 Update: 16 Gb Chips on Track for 2019
Earlier this year, Cadence and Micron performed the industry's first public demonstration of next-generation DDR5 memory. At a TSMC event earlier this month, the two companies provided some updates concerning development of the new memory technology. While the spec has not yet been finalized at JEDEC, Micron still expects to start production of DDR5 memory chips in late 2019.
As noted back in May, the primary feature of DDR5 SDRAM is increased chip capacity, not just higher performance and lower power consumption. DDR5 is expected to bring I/O speeds of 4266 to 6400 MT/s, with the supply voltage dropping to 1.1 V and an allowable fluctuation range of 3% (i.e., ±0.033 V). It is also expected to use two independent 32/40-bit channels per module (without or with ECC, respectively). Furthermore, DDR5 will offer improved command-bus efficiency (because each channel will have its own 7-bit Address (Add)/Command (Cmd) bus), better refresh schemes, and more bank groups for additional performance. In fact, Cadence goes as far as saying that the improved functionality of DDR5 will enable 36% higher real-world bandwidth than DDR4 even at 3200 MT/s (a claim that will have to be put to the test), and once 4800 MT/s speeds kick in, actual bandwidth will be 87% higher than DDR4-3200. Meanwhile, one of the most important features of DDR5 will be monolithic chip density beyond 16 Gb.
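For a sense of scale, peak theoretical bandwidth follows directly from the transfer rate, since a module still presents 64 data bits whether as one DDR4 channel or two 32-bit DDR5 subchannels. A quick back-of-the-envelope sketch in Python (theoretical peaks only; Cadence's 36%/87% figures concern real-world efficiency, which this does not model):

    # Peak DIMM bandwidth = transfers/s x data bits / 8.
    # DDR5 splits the 64 data bits into two independent 32-bit channels,
    # so for a given transfer rate the per-module peak is unchanged.
    def peak_gbs(mt_per_s, data_bits=64):
        return mt_per_s * 1e6 * data_bits / 8 / 1e9

    for name, rate in [("DDR4-3200", 3200), ("DDR5-4800", 4800), ("DDR5-6400", 6400)]:
        print(f"{name}: {peak_gbs(rate):.1f} GB/s")
    # DDR4-3200: 25.6 GB/s; DDR5-4800: 38.4 GB/s; DDR5-6400: 51.2 GB/s

Note that DDR5-4800's theoretical peak is only 50% above DDR4-3200; Cadence's 87% real-world claim rests on the command-bus and bank-group efficiency gains described above.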
Leading DRAM makers already have monolithic DDR4 chips featuring a 16 Gb capacity, but those devices cannot offer extreme clocks or I/O speeds because of the laws of physics. Therefore, companies like Micron have a lot of work to do to bring together high DRAM densities and high performance in the DDR5 era. In particular, Micron is concerned about variable retention time and other atomic-level effects once DRAM production technologies reach 10-12 nm. Meanwhile, the DDR5 Add/Cmd bus already features on-die termination to make signals cleaner and to improve stability at high data rates. Furthermore, high-end DDR5 DIMMs will have their own voltage regulators and PMICs. Long story short, while the DDR5 standard is tailored to wed performance and density, there is still a lot of magic to be done by DRAM manufacturers.
Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated
Related Stories
JEDEC has announced that it expects to finalize the DDR5 standard by next year. It says that DDR5 will double bandwidth and density, and increase power efficiency, presumably by lowering the operating voltages again (perhaps to 1.1 V). Availability of DDR5 modules is expected by 2020:
You may have just upgraded your computer to use DDR4 recently or you may still be using DDR3, but in either case, nothing stays new forever. JEDEC, the organization in charge of defining new standards for computer memory, says that it will be demoing the next-generation DDR5 standard in June of this year and finalizing the standard sometime in 2018. DDR5 promises double the memory bandwidth and density of DDR4, and JEDEC says it will also be more power-efficient, though the organization didn't release any specific numbers or targets.
The DDR4 SDRAM specification was finalized in 2012, and DDR3 in 2007, so DDR5's arrival is to be expected (cue the Soylentils still using DDR2). One way for DDR5 to double memory bandwidth over DDR4 is to double the DRAM prefetch to 16n, matching GDDR5X.
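To see why doubling the prefetch doubles bandwidth, note that the external data rate is roughly the internal array clock multiplied by the prefetch depth, and the array clock itself has barely budged between generations. A small illustration (the 400 MHz array clock is an illustrative figure, not taken from the standard):

    # External data rate ~= internal array clock x prefetch depth.
    # The DRAM array is hard to clock faster, so each DDR generation
    # widens the prefetch instead.
    array_clock_mhz = 400  # illustrative core clock, held constant

    for gen, prefetch in [("DDR4 (8n)", 8), ("DDR5 (16n)", 16)]:
        print(f"{gen}: {array_clock_mhz * prefetch} MT/s")
    # DDR4 (8n):  3200 MT/s
    # DDR5 (16n): 6400 MT/s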
Graphics cards are beginning to ship with GDDR5X. Some graphics cards and Knights Landing Xeon Phi chips include High Bandwidth Memory (HBM). A third generation of HBM will offer increased memory bandwidth and density, and more than 8 dies in a stack. Samsung has also talked about a cheaper version of HBM for consumers with a lower total bandwidth. SPARC64 XIfx chips include Hybrid Memory Cube. GDDR6 SDRAM could raise per-pin bandwidth to 14-16 Gbps, up from the 10-12 Gbps of shipping GDDR5X, while lowering power consumption.
Cadence and Micron Demo DDR5-4400 IMC and Memory, Due in 2019
Cadence this week introduced the industry's first IP interface in silicon for the current provisional DDR5 specification developed by JEDEC. Cadence's IP and test chip are fabricated using TSMC's 7 nm process technology, and are designed to enable SoC developers to begin work on their DDR5 memory subsystems now and get them to market in 2019-2020, depending on high-volume DDR5 availability. At a special event, Cadence teamed up with Micron to demonstrate their DDR5 DRAM subsystem. In the meantime, Micron has started to sample its preliminary DDR5 chips to interested parties.
Cadence's DDR5 memory controller and PHY achieve a 4400 MT/s data rate at CL42 using Micron's prototype 8 Gb DDR5 memory chips. Compared to today's DDR4, the supply voltage of DDR5 drops from 1.2 V to 1.1 V, with an allowable fluctuation range of only ±0.033 V. In practice, this means that an 8 Gb DDR5 DRAM chip can hit a considerably higher I/O speed than a commercial 8 Gb DDR4 IC today, at a ~9% lower voltage. JEDEC plans for the DDR5 interface to eventually reach 6400 MT/s, but Cadence says that initial DDR5 memory ICs will support ~4400 MT/s data rates. This will be akin to DDR4 rising from DDR4-2133 at initial launch to DDR4-3200 today. Cadence's DDR5 demo video can be watched here.
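A higher CL number does not necessarily mean slower memory in absolute terms, because the clock period shrinks as the data rate rises. A quick conversion of CAS latency to nanoseconds (the DDR4-3200 CL22 comparison point is a typical retail part, our assumption rather than a figure from the demo):

    # CAS latency in ns = CL cycles / I/O clock; the I/O clock runs at
    # half the transfer rate (DDR = two transfers per clock).
    def cas_ns(mt_per_s, cl):
        clock_mhz = mt_per_s / 2
        return cl / clock_mhz * 1000

    print(f"DDR5-4400 CL42: {cas_ns(4400, 42):.1f} ns")  # ~19.1 ns
    print(f"DDR4-3200 CL22: {cas_ns(3200, 22):.1f} ns")  # ~13.8 ns (assumed part)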
DDR5 Memory Specification Released: Setting the Stage for DDR5-6400 And Beyond
We'll start with a brief look at capacity and density, as this is the most straightforward change to the standard compared to DDR4. Designed to span several years of use (if not longer), DDR5 will allow for individual memory chips up to 64Gbit in density, which is 4x higher than DDR4's 16Gbit maximum. Combined with die stacking, which allows for up to 8 dies to be stacked as a single chip, a 40-element LRDIMM can reach an effective memory capacity of 2TB. Or, for the more humble unbuffered DIMM, this means we'll eventually see DIMM capacities reach 128GB for a typical dual-rank configuration.
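The headline capacities are straightforward multiplication. A sketch of the arithmetic, assuming the 40-element LRDIMM devotes 32 packages to data and 8 to ECC across its two 40-bit subchannels (that split is our reading of the channel layout, not stated in the quoted text):

    # Max DDR5 die: 64 Gbit = 8 GB; up to 8 dies stacked per package.
    die_gb = 64 / 8          # 8 GB per 64 Gbit die
    stack_gb = die_gb * 8    # 64 GB per 8-high stacked package

    # 40-element LRDIMM: assume 32 data packages + 8 ECC packages
    # (two 40-bit subchannels = 2 x (32 data + 8 ECC) bits).
    print(f"LRDIMM: {32 * stack_gb / 1024:.0f} TB")  # 2 TB

    # Unbuffered dual-rank DIMM: 2 ranks x 8 single-die x8 chips.
    print(f"UDIMM:  {2 * 8 * die_gb:.0f} GB")        # 128 GB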
[...] For DDR5, JEDEC is looking to start things off much more aggressively than usual for a DDR memory specification. Typically a new standard picks up from where the last one left off, as with the DDR3 to DDR4 transition, where DDR3 officially stopped at 1.6Gbps and DDR4 started from there. For DDR5, however, JEDEC is aiming much higher, with the group expecting to launch at 4.8Gbps, some 50% faster than the official 3.2Gbps max speed of DDR4. And in the years afterwards, the current version of the specification allows for data rates up to 6.4Gbps, doubling the official peak of DDR4.
Of course, sly enthusiasts will note that DDR4 already goes above the official maximum of 3.2Gbps (sometimes well above), and it's likely that DDR5 will eventually go a similar route. The underlying goal, regardless of specific figures, is to double the amount of bandwidth available today from a single DIMM. So don't be too surprised if SK Hynix indeed hits their goal of DDR5-8400 later this decade.
[...] JEDEC is also using the introduction of the DDR5 memory standard to make a fairly important change to how voltage regulation works for DIMMs. In short, voltage regulation is being moved from the motherboard to the individual DIMM, leaving DIMMs responsible for their own voltage regulation needs. This means that DIMMs will now include an integrated voltage regulator, and this goes for everything from UDIMMs to LRDIMMs.
JEDEC is dubbing this "pay as you go" voltage regulation, and is aiming to improve/simplify a few different aspects of DDR5 with it. The most significant change is that by moving voltage regulation on to the DIMMs themselves, voltage regulation is no longer the responsibility of the motherboard. Motherboards in turn will no longer need to be built for the worst-case scenario – such as driving 16 massive LRDIMMs – simplifying motherboard design and reining in costs to a degree. Of course, the flip side of this argument is that it moves those costs over to the DIMM itself, but then system builders are at least only having to buy as much voltage regulation hardware as they have DIMMs, and hence the PAYGO philosophy.
"On-die ECC" is mentioned in the press release and slides. If you can figure out what that means, let us know.
See also: Micron Drives DDR5 Memory Adoption with Technology Enablement Program
Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated
Cadence and Micron Plan Production of 16 Gb DDR5 Chips in 2019
SK Hynix Announces Plans for DDR5-8400 Memory, and More
DDR5 is Coming: First 64GB DDR5-4800 Modules from SK Hynix
DDR5 is the next stage of platform memory for use in the majority of major compute platforms. The specification (as released in July 2020) brings the main voltage down from 1.2 V to 1.1 V, increases the maximum silicon die density by a factor of 4, doubles the maximum data rate, doubles the burst length, and doubles the number of bank groups. Simply put, the JEDEC DDR5 specification allows for a 128 GB unbuffered module running at DDR5-6400. RDIMMs and LRDIMMs should be able to go much higher, power permitting.
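The doubled burst length dovetails with the channel split: a 32-bit subchannel delivering a 16-beat burst moves exactly 64 bytes, one typical CPU cache line per subchannel transaction, which is the design rationale usually cited for the change (our gloss; the quoted text does not spell this out):

    # One DDR5 burst: BL16 x 32-bit subchannel = 512 bits = 64 bytes,
    # i.e. exactly one 64-byte cache line per subchannel transaction.
    print(16 * 32 // 8, "bytes")  # DDR5: BL16 on a 32-bit subchannel -> 64
    print(8 * 64 // 8, "bytes")   # DDR4: BL8 on the full 64-bit bus  -> 64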
[...] SK Hynix's announcement today is that they are ready to start shipping DDR5 ECC memory to module manufacturers – specifically 16 gigabit dies built on its 1Ynm process that support DDR5-4800 to DDR5-5600 at 1.1 volts. With the right packaging technology (such as 3D TSV), SK Hynix says that partners can build 256 GB LRDIMMs. Additional binning of the chips for better-than-JEDEC speeds will have to be done by the module manufacturers themselves. SK Hynix also appears to have its own modules, specifically 32GB and 64GB RDIMMs at DDR5-4800, and has previously promised to offer memory up to DDR5-8400.
[...] As part of the announcement, it was interesting to see Intel as one of the lead partners for these modules. Intel has committed to enabling DDR5 on its Sapphire Rapids Xeon processor platform, due for initial launch in late 2021/2022. AMD was not mentioned with the announcement, and neither were any Arm partners.
SK Hynix expects DDR5 to make up 10% of the global market in 2021, increasing to 43% in 2024. The crossover point for consumer platforms is somewhat blurry right now, as we're probably only halfway (or less) through the DDR4 cycle. Traditionally we expect a cost crossover between old and new technology when they are equal in market share; however, the additional voltage-regulation hardware that DDR5 requires is likely to drive up module costs, scaling from standard power delivery on JEDEC modules up to beefier solutions on overclocked modules. It should, however, make motherboards cheaper in that regard.
See also: Insights into DDR5 Sub-timings and Latencies
Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated
Cadence and Micron Plan Production of 16 Gb DDR5 Chips in 2019
SK Hynix Announces Plans for DDR5-8400 Memory, and More
JEDEC Releases DDR5 Memory Specification
(Score: 2) by bzipitidoo on Tuesday October 23 2018, @06:57PM (6 children)
There are so many areas within a computer system that operate with a good deal of independence these days that it's hard to follow them all. It's been a long time since the Central in CPU meant god-like central control of everything, when there was One Clock to Rule Them All.
Let's see, the subsystems are:
CPU
GPU
memory
bus (PCI-E)
USB
networking
HDD or SSD
audio
Of those, perhaps only audio is no big deal?
(Score: 4, Interesting) by takyon on Tuesday October 23 2018, @08:07PM (5 children)
The biggest deal for performance in recent memory has been the transition from HDDs to SSDs.
A more transient thing has been the usual industry collusion causing DRAM prices to effectively triple.
We may see something new get added to your list: a neuromorphic or tensor processing unit. Lots of smartphones have been including a variation on "AI" hardware in their SoCs, and I could see it getting added to desktop PCs. Perhaps integrated with the CPU.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by jasassin on Wednesday October 24 2018, @12:43AM (4 children)
I think this will continue to be the most important factor for everyday use in the foreseeable future. I don't have one, because I'm concerned about their reliability and longevity.
I'd love to hear positive and negative comments from long-term owners of SSDs.
jasassin@gmail.com GPG Key ID: 0xE6462C68A9A3DB5A
(Score: 2) by bzipitidoo on Wednesday October 24 2018, @04:31AM (2 children)
In recent years, I've had more trouble with HDDs than ever. That AMD machine I mentioned is now on its 3rd HDD, the first 2 having failed prematurely. Meanwhile, every one of my older hard drives is still working, or was when I last used them.
The SSD in my newest machine is a year and a half old, and so far it's been working perfectly. I even keep it pretty close to full. And yeah, I even have a swap partition on it. That used to be a very bad idea, one of the prime ways to send your SSD to an early grave. But SSDs are a lot more robust these days, and with wear leveling and much higher endurance limits on the number of write cycles they can take, swap no longer poses that problem. Nevertheless, I have plenty of RAM, so the swap is seldom used. One thing I use swap for is to slow everything down if memory runs short. It gives me time to notice there's a problem and take intelligent action before the process killer starts bumping off processes, or worse things happen. That worked fine with an HDD -- its slowness was an advantage in that scenario. But now, with an SSD, file I/O may be too fast.
Anyway, no matter which you use, you know what you're supposed to do: back up regularly. That does more than protect you from failing hardware. It also protects you from human error.
(Score: 0) by Anonymous Coward on Friday October 26 2018, @07:47PM (1 child)
Seagate has been notorious for their failure rates. Toshiba/HGST are currently the best on most models, and WD falls in between them and Seagate, although closer to them in reliability.
I haven't had a serious failure on any of the latter three brands in 10-20 years, with 10+ year old drives still running with only a few stable bad-sector patches on them. With Seagate, on the other hand, I have had a complete drive failure on a 3TB unit and some drive-electronics failures on older 500GB-2TB drives using the same or similar controller modules.
(Score: 2) by bzipitidoo on Friday October 26 2018, @10:06PM
Yeah, I've heard Seagate is the least reliable these days, but the two that failed on me were WD drives. The first one, a 1TB Caviar Green, failed in just 9 months. The next one was a Caviar Black, and it failed after 3 years. I was determined to get a different brand after that, but there's not a lot to choose from any more. Went with a Fujitsu, which is still working today. Meh, seems SSD is the way to go from now on.
(Score: 0) by Anonymous Coward on Wednesday October 24 2018, @09:37AM
SSD reliability has long since surpassed HDD reliability. Buy a name brand (Samsung, Corsair, Western Digital, Intel perhaps if you have more money than brains) and keep an eye on the drive-life SMART counter. If you are convinced that your workload is heavy enough to wear out an SSD before you upgrade the machine, buy Optane, which has nearly unlimited write lifetime. Hint: this is unlikely; even with swap on the SSD and half a dozen Windows updaters running in the background, you are unlikely to write more than a TB per year.
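If you want to automate that check, here's a minimal sketch driving smartmontools from Python. Attribute names vary by vendor and between SATA and NVMe drives, so the keywords matched below are only illustrative:

    # Rough SSD wear check via smartmontools (smartctl must be installed).
    import subprocess

    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        # Vendor-specific wear/endurance attributes; adjust for your drive.
        if any(k in line for k in ("Wear_Leveling_Count",
                                   "Media_Wearout_Indicator",
                                   "Percentage Used",
                                   "Total_LBAs_Written")):
            print(line.strip())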
(Score: 2) by DannyB on Tuesday October 23 2018, @10:03PM
In the Java world there are two new GCs for multi-terabyte heaps (yes, you read that right) that achieve 10 ms GC pause times: Red Hat's Shenandoah and Oracle's ZGC. Both are in OpenJDK (GPL + classpath exception). There is also a commercial GC for big heaps, Azul Systems' Zing, which has been around a while and is known to be good for heaps of hundreds of gigabytes.
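For reference, enabling these collectors is a launch-flag affair; something like the lines below on a recent OpenJDK (MyApp stands in for whatever main class you run, and both collectors were still gated behind the experimental-options flag in the JDK builds of that era):

    java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -Xmx512g MyApp
    java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx512g MyApp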
I have heard of such systems having up to 768 cores.
At long last it will be common to have enough memory, and low enough GC pause time for a decent Java Hello Whirrled program.
Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!