JEDEC has announced that it expects to finalize the DDR5 standard by next year. It says that DDR5 will double bandwidth and density, and increase power efficiency, presumably by lowering the operating voltages again (perhaps to 1.1 V). Availability of DDR5 modules is expected by 2020:
You may have just upgraded your computer to use DDR4 recently or you may still be using DDR3, but in either case, nothing stays new forever. JEDEC, the organization in charge of defining new standards for computer memory, says that it will be demoing the next-generation DDR5 standard in June of this year and finalizing the standard sometime in 2018. DDR5 promises double the memory bandwidth and density of DDR4, and JEDEC says it will also be more power-efficient, though the organization didn't release any specific numbers or targets.
The DDR4 SDRAM specification was finalized in 2012, and DDR3 in 2007, so DDR5's arrival is to be expected (cue the Soylentils still using DDR2). One way to double the memory bandwidth of DDR5 is to double the DRAM prefetch to 16n, matching GDDR5X.
Graphics cards are beginning to ship with GDDR5X. Some graphics cards and Knights Landing Xeon Phi chips include High Bandwidth Memory (HBM). A third generation of HBM will offer increased memory bandwidth, density, and more than 8 dies in a stack. Samsung has also talked about a cheaper version of HBM for consumers with a lower total bandwidth. SPARC64 XIfx chips include Hybrid Memory Cube. GDDR6 SDRAM could raise per-pin bandwidth beyond the 10-14 Gbps of GDDR5X, while lowering power consumption.
Related Stories
JEDEC has finalized the GDDR5X SGRAM specification:
The new technology is designed to improve bandwidth available to high-performance graphics processing units without fundamentally changing the memory architecture of graphics cards or memory technology itself, similar to other generations of GDDR, although these new specifications are arguably pushing the physical limits of the technology and hardware in its current form. The GDDR5X SGRAM (synchronous graphics random access memory) standard is based on the GDDR5 technology introduced in 2007 and first used in 2008. The GDDR5X standard brings three key improvements to the well-established GDDR5: it increases data rates by up to a factor of two, it improves the energy efficiency of high-end memory, and it defines new capacities of memory chips to enable denser memory configurations of add-in graphics boards or other devices. What is very important for developers of chips and makers of graphics cards is that GDDR5X should not require drastic changes to graphics card designs, and the general feature set of GDDR5 remains unchanged (which is why it is not being called GDDR6).
[...] The key improvement of the GDDR5X standard compared to the predecessor is its all-new 16n prefetch architecture, which enables up to 512 bit (64 Bytes) per array read or write access. By contrast, the GDDR5 technology features 8n prefetch architecture and can read or write up to 256 bit (32 Bytes) of data per cycle. Doubled prefetch and increased data transfer rates are expected to double effective memory bandwidth of GDDR5X sub-systems. However, actual performance of graphics cards will depend not just on DRAM architecture and frequencies, but also on memory controllers and applications. Therefore, we will need to test actual hardware to find out actual real-world benefits of the new memory.
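As a rough sanity check on those figures (a sketch only; the 32-bit chip interface and the 12 Gbps pin rate are typical GDDR5/GDDR5X values, not additional data from the announcement):

io_width = 32                               # per-chip interface width in bits
print(8 * io_width, "bits per access")      # GDDR5, 8n prefetch  -> 256
print(16 * io_width, "bits per access")     # GDDR5X, 16n prefetch -> 512
# Per-chip peak bandwidth scales with the per-pin data rate, e.g. at 12 Gbps:
print(12 * io_width / 8, "GB/s per chip")   # -> 48.0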
What purpose does GDDR5X serve if superior 1st and 2nd generation High Bandwidth Memory (HBM) are around? GDDR5X memory will be cheaper than HBM and its use is more of an evolutionary than revolutionary change from existing GDDR5-based hardware.
SK Hynix is almost ready to produce GDDR6 memory with higher than expected per-pin bandwidth:
In a surprising move, SK Hynix has announced its first memory chips based on the yet-unpublished GDDR6 standard. The new DRAM devices for video cards have a capacity of 8 Gb and run at a 16 Gbps per-pin data rate, which is significantly higher than both standard GDDR5 and Micron's unique GDDR5X format. SK Hynix plans to produce its GDDR6 ICs in volume by early 2018.
GDDR5 memory has been used for top-of-the-range video cards for over seven years, from summer 2008 to the present. Throughout its active lifespan, GDDR5 more than doubled its data rate, from 3.6 Gbps to 9 Gbps, while per-chip capacities increased 16-fold, from 512 Mb to 8 Gb. In fact, numerous high-end graphics cards, such as NVIDIA's GeForce GTX 1060 and 1070, still rely on the GDDR5 technology, which is not going anywhere even after the launch of Micron's GDDR5X with up to 12 Gbps data rate per pin in 2016. It appears that GDDR6 will be used for high-end graphics cards starting in 2018, just two years after the introduction of GDDR5X.
Previously: Samsung Announces Mass Production of HBM2 DRAM
DDR5 Standard to be Finalized by JEDEC in 2018
Samsung's second generation ("1y-nm") 8 Gb DDR4 DRAM dies are being mass produced:
Samsung late on Wednesday said that it had initiated mass production of DDR4 memory chips using its second generation '10 nm-class' fabrication process. The new manufacturing technology shrinks the die size of the new DRAM chips and improves their performance as well as energy efficiency. To do that, the process uses new circuit designs featuring air spacers (for the first time in the DRAM industry). The new DRAM ICs (integrated circuits) can operate at a 3600 Mbit/s per pin data rate (DDR4-3600) at standard DDR4 voltages and have already been validated with major CPU manufacturers.
[...] Samsung's new DDR4 chip produced using the company's 1y nm fabrication process has an 8-gigabit capacity and supports 3600 MT/s data transfer rate at 1.2 V. The new D-die DRAM runs 12.5% faster than its direct predecessor (known as Samsung C-die, rated for 3200 MT/s) and is claimed to be up to 15% more energy efficient as well. In addition, the latest 8Gb DDR4 ICs use a new in-cell data sensing system that offers a more accurate determination of the data stored in each cell and which helps to increase the level of integration (i.e., make cells smaller) and therefore shrink die size.
Samsung says that the new 8Gb DDR4 chips feature an "approximate 30% productivity gain" when compared to similar chips made using the 1x nm manufacturing tech.
UPDATE 12/21: Samsung clarified that productivity gain means increase in the number of chips per wafer. Since capacity of Samsung's C-die and D-die is the same, the increase in the number of dies equals the increase in the number of bits per wafer. Therefore, the key takeaway from the announcement is that the 1y nm technology and the new in-cell data sensing system enable Samsung to shrink die size and fit more DRAM dies on a single 300-mm wafer. Meanwhile, the overall 30% productivity gain results in lower per-die costs at the same yield and cycle time (this does not mean that the IC costs are 30% lower though) and increases DRAM bit output.
The in-cell data sensing system and air spacers will be used by Samsung in other upcoming types of DRAM, including DDR5, LPDDR5, High Bandwidth Memory 3.0, and GDDR6.
Also at Tom's Hardware.
Previously: Samsung Announces "10nm-Class" 8 Gb DRAM Chips
Related: Samsung Announces 12Gb LPDDR4 DRAM, Could Enable Smartphones With 6 GB of RAM
Samsung Announces 8 GB DRAM Package for Mobile Devices
Samsung's 10nm Chips in Mass Production, "6nm" on the Roadmap
Samsung Increases Production of 8 GB High Bandwidth Memory 2.0 Stacks
IC Insights Predicts Additional 40% Increase in DRAM Prices
Samsung has announced the mass production of 16 Gb GDDR6 SDRAM chips with a higher-than-expected pin speed. The chips could see use in upcoming graphics cards that are not equipped with High Bandwidth Memory:
Samsung has beaten SK Hynix and Micron to be the first to mass produce GDDR6 memory chips. Samsung's 16Gb (2GB) chips are fabricated on a 10nm process and run at 1.35V. The new chips have a whopping 18Gb/s pin speed and will be able to reach a transfer rate of 72GB/s. Samsung's current 8Gb (1GB) GDDR5 memory chips, besides having half the density, work at 1.55V with up to 9Gb/s pin speeds. In a pre-CES 2018 press release, Samsung briefly mentioned the impending release of these chips. However, the speed on release is significantly faster than the earlier stated 16Gb/s pin speed and 64GB/s transfer rate.
18 Gbps exceeds what the JEDEC standard calls for.
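The quoted transfer rates follow directly from the pin speed and the 32-bit interface of each chip (a quick check, not additional Samsung data):

print(18 * 32 / 8, "GB/s per GDDR6 chip")   # 18 Gbps x 32 pins -> 72.0
print(9 * 32 / 8, "GB/s per GDDR5 chip")    #  9 Gbps x 32 pins -> 36.0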
Also at Engadget and Wccftech.
Related: GDDR5X Standard Finalized by JEDEC
DDR5 Standard to be Finalized by JEDEC in 2018
SK Hynix to Begin Shipping GDDR6 Memory in Early 2018
Samsung's Second Generation 10nm-Class DRAM in Production
Cadence and Micron Demo DDR5-4400 IMC and Memory, Due in 2019
Cadence this week introduced the industry's first IP interface in silicon for the current provisional DDR5 specification developed by JEDEC. Cadence's IP and test chip are fabricated using TSMC's 7 nm process technology and are designed to enable SoC developers to begin work on their DDR5 memory subsystems now and get them to market in 2019-2020, depending on high-volume DDR5 availability. At a special event, Cadence teamed up with Micron to demonstrate their DDR5 DRAM subsystem. In the meantime, Micron has started to sample its preliminary DDR5 chips to interested parties.
Cadence's DDR5 memory controller and PHY achieve a 4400 MT/s data rate with CL42 using Micron's prototype 8 Gb DDR5 memory chips. Compared to DDR4 today, the supply voltage of DDR5 is dropped from 1.2 volts to 1.1 volts, with an allowable fluctuation range of only ±0.033 V. In this case, the specifications mean that an 8 Gb DDR5 DRAM chip can hit a considerably higher I/O speed than an 8 Gb commercial DDR4 IC today at a ~9% lower voltage. JEDEC plans that eventually the DDR5 interface will get to 6400 MT/s, but Cadence says that initial DDR5 memory ICs will support ~4400 MT/s data rates. This will be akin to DDR4 rising from DDR4-2133 at initial launch to DDR4-3200 today. Cadence's DDR5 demo video can be watched here.
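For a sense of what CL42 at 4400 MT/s means in absolute terms, here is a back-of-the-envelope conversion to nanoseconds (the DDR4-3200 CL22 comparison point is a common JEDEC bin, not a figure from the demo):

def cas_ns(cl, mt_s):
    return cl * 2000 / mt_s        # the command clock runs at half the transfer rate

print(cas_ns(42, 4400))            # DDR5-4400 CL42 -> ~19.1 ns
print(cas_ns(22, 3200))            # DDR4-3200 CL22 -> ~13.8 ns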
Cadence & Micron DDR5 Update: 16 Gb Chips on Track for 2019
Earlier this year Cadence and Micron performed the industry's first public demonstration of next-generation DDR5 memory. At a TSMC event earlier this month the two companies provided some updates on the development of the new memory technology. It appears that the spec has not been finalized at JEDEC yet, but Micron still expects to start production of DDR5 memory chips in late 2019.
As noted back in May, the primary feature of DDR5 SDRAM is chip capacity, not just higher performance and lower power consumption. DDR5 is expected to bring I/O speeds of 4266 to 6400 MT/s, with the supply voltage dropping to 1.1 V and an allowable fluctuation range of 3% (i.e., ±0.033 V). It is also expected to use two independent 32/40-bit channels per module (without or with ECC, respectively). Furthermore, DDR5 will have improved command bus efficiency (because the channels will have their own 7-bit Address (Add)/Command (Cmd) buses), better refresh schemes, and more bank groups for additional performance. In fact, Cadence goes as far as saying that the improved functionality of DDR5 will enable a 36% higher real-world bandwidth when compared to DDR4 even at 3200 MT/s (a claim that will have to be put to the test), and once 4800 MT/s speeds kick in, the actual bandwidth will be 87% higher than DDR4-3200. In the meantime, one of the most important features of DDR5 will be monolithic chip density beyond 16 Gb.
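To put Cadence's claim in context, the raw data-rate increase alone accounts for only part of the gain; a sketch assuming the usual 64 data bits per module (the remainder of the claimed 87% would have to come from the efficiency features listed above):

def peak_gb_s(mt_s, bus_bits=64):
    return mt_s * bus_bits / 8 / 1000

print(peak_gb_s(3200))   # DDR4-3200 -> 25.6 GB/s
print(peak_gb_s(4800))   # DDR5-4800 -> 38.4 GB/s, i.e. only +50% from clocks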
Leading DRAM makers already have monolithic DDR4 chips featuring a 16 Gb capacity, but those devices cannot offer extreme clocks or I/O speeds because of the laws of physics. Therefore, companies like Micron have a lot of work to do to bring together high DRAM densities and high performance in the DDR5 era. In particular, Micron is concerned about variable retention time and other atomic-level effects once the production technologies used for DRAM reach 10 – 12 nm. Meanwhile, the DDR5 Add/Cmd bus already features on-die termination to make signals cleaner and to improve stability at high data rates. Furthermore, high-end DDR5 DIMMs will have their own voltage regulators and PMICs. Long story short, while the DDR5 standard is tailored to wed performance and density, there is still a lot of magic to be done by DRAM manufacturers.
Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated
DDR5 Memory Specification Released: Setting the Stage for DDR5-6400 And Beyond
We'll start with a brief look at capacity and density, as this is the most straightforward change to the standard compared to DDR4. Designed to span several years (if not longer), DDR5 will allow for individual memory chips up to 64Gbit in density, which is 4x higher than DDR4's 16Gbit maximum. Combined with die stacking, which allows for up to 8 dies to be stacked as a single chip, a 40 element LRDIMM can reach an effective memory capacity of 2TB. Or for the more humble unbuffered DIMM, this would mean we'll eventually see DIMM capacities reach 128GB for your typical dual rank configuration.
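Those capacity figures can be reproduced with simple arithmetic; the device widths and ECC layout below are assumptions chosen to match the quoted 2 TB and 128 GB numbers, not details taken from the specification itself:

die_gb = 64 / 8                 # 64 Gbit monolithic die -> 8 GB

# LRDIMM: 8-die stacks, 40 packages, of which 32 carry data (8 assumed for ECC)
print(die_gb * 8 * 32, "GB")    # -> 2048 GB = 2 TB

# Unbuffered dual-rank DIMM: 16 single-die x8 packages carrying data
print(die_gb * 16, "GB")        # -> 128 GB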
[...] For DDR5, JEDEC is looking to start things off much more aggressively than usual for a DDR memory specification. Typically a new standard picks up from where the last one left off, such as with the DDR3 to DDR4 transition, where DDR3 officially stopped at 1.6Gbps and DDR4 started from there. However, for DDR5 JEDEC is aiming much higher, with the group expecting to launch at 4.8Gbps, some 50% faster than the official 3.2Gbps max speed of DDR4. And in the years afterwards, the current version of the specification allows for data rates up to 6.4Gbps, doubling the official peak of DDR4.
Of course, sly enthusiasts will note that DDR4 already goes above the official maximum of 3.2Gbps (sometimes well above), and it's likely that DDR5 will eventually go a similar route. The underlying goal, regardless of specific figures, is to double the amount of bandwidth available today from a single DIMM. So don't be too surprised if SK Hynix indeed hits their goal of DDR5-8400 later this decade.
[...] JEDEC is also using the introduction of the DDR5 memory standard to make a fairly important change to how voltage regulation works for DIMMs. In short, voltage regulation is being moved from the motherboard to the individual DIMM, leaving DIMMs responsible for their own voltage regulation needs. This means that DIMMs will now include an integrated voltage regulator, and this goes for everything from UDIMMs to LRDIMMs.
JEDEC is dubbing this "pay as you go" voltage regulation, and is aiming to improve/simplify a few different aspects of DDR5 with it. The most significant change is that by moving voltage regulation on to the DIMMs themselves, voltage regulation is no longer the responsibility of the motherboard. Motherboards in turn will no longer need to be built for the worst-case scenario – such as driving 16 massive LRDIMMs – simplifying motherboard design and reining in costs to a degree. Of course, the flip side of this argument is that it moves those costs over to the DIMM itself, but then system builders are at least only having to buy as much voltage regulation hardware as they have DIMMs, and hence the PAYGO philosophy.
"On-die ECC" is mentioned in the press release and slides. If you can figure out what that means, let us know.
See also: Micron Drives DDR5 Memory Adoption with Technology Enablement Program
Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated
Cadence and Micron Plan Production of 16 Gb DDR5 Chips in 2019
SK Hynix Announces Plans for DDR5-8400 Memory, and More
DDR5 is Coming: First 64GB DDR5-4800 Modules from SK Hynix
DDR5 is the next stage of platform memory for use in the majority of major compute platforms. The specification (as released in July 2020) brings the main voltage down from 1.2 V to 1.1 V, increases the maximum silicon die density by a factor of 4, doubles the maximum data rate, doubles the burst length, and doubles the number of bank groups. Simply put, the JEDEC DDR5 specification allows for a 128 GB unbuffered module running at DDR5-6400. RDIMMs and LRDIMMs should be able to go much higher, power permitting.
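One detail worth unpacking is how the doubled burst length interacts with the two independent 32-bit sub-channels described in the earlier Cadence/Micron coverage: the net access size stays matched to a 64-byte CPU cache line. A sketch of the arithmetic (illustrative, not text from the specification):

ddr4_access = 8 * 64 // 8       # BL8  x 64-bit channel      -> 64 bytes
ddr5_access = 16 * 32 // 8      # BL16 x 32-bit sub-channel  -> 64 bytes
print(ddr4_access, ddr5_access)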
[...] SK Hynix's announcement today is that they are ready to start shipping DDR5 ECC memory to module manufacturers – specifically 16 gigabit dies built on its 1Ynm process that support DDR5-4800 to DDR5-5600 at 1.1 volts. With the right packaging technology (such as 3D TSV), SK Hynix says that partners can build 256 GB LRDIMMs. Additional binning of the chips for better-than-JEDEC speeds will have to be done by the module manufacturers themselves. SK Hynix also appears to have its own modules, specifically 32GB and 64GB RDIMMs at DDR5-4800, and has previously promised to offer memory up to DDR5-8400.
[...] As part of the announcement, it was interesting to see Intel as one of the lead partners for these modules. Intel has committed to enabling DDR5 on its Sapphire Rapids Xeon processor platform, due for initial launch in late 2021/2022. AMD was not mentioned with the announcement, and neither were any Arm partners.
SK Hynix quotes that DDR5 is expected to be 10% of the global market in 2021, increasing to 43% in 2024. The intersection point for consumer platforms is somewhat blurred at this point, as we're probably only half-way through (or less) of the DDR4 cycle. Traditionally we expect a cost intersection between old and new technology when they are equal in market share; however, the additional costs in voltage regulation that DDR5 requires are likely to drive up module costs – scaling from standard power delivery on JEDEC modules up to a beefier solution on overclocked modules. It should, however, make motherboards cheaper in that regard.
See also: Insights into DDR5 Sub-timings and Latencies
Previously: DDR5 Standard to be Finalized by JEDEC in 2018
DDR5-4400 Test Chip Demonstrated
Cadence and Micron Plan Production of 16 Gb DDR5 Chips in 2019
SK Hynix Announces Plans for DDR5-8400 Memory, and More
JEDEC Releases DDR5 Memory Specification
(Score: 0) by Anonymous Coward on Monday April 03 2017, @01:00AM (1 child)
I still use DDR2.
(Score: 1) by Scruffy Beard 2 on Monday April 03 2017, @02:14AM
I have some PC-100 collecting warm dust. (But yes, using DDR2 to post this.)
(Score: 1, Funny) by Anonymous Coward on Monday April 03 2017, @02:31AM (1 child)
The big thing to look for is whether RGBU (red-green-blue-ultraviolet) makes it into the new standard. Having to choose between RGB and ad hoc RGBU is no choice at all. If that situation persists in DDR5 it will mean a lost generation for computing.
(Score: 3, Insightful) by takyon on Monday April 03 2017, @02:42AM
The RGBU LEDs should be able to pulse fast enough to communicate across the room (with the CIA). Also, don't forget the Twitch integration.
(Score: 2) by Hairyfeet on Monday April 03 2017, @02:39AM (7 children)
just as they do not care about the latest CPUs, because the simple fact is that once we switched from the MHz wars to the core wars, PCs quickly became so powerful that most users aren't even using half of what they have, so they see no reason to replace them. That is why guys like me moved into other jobs like HTPCs and home theater installs, as PC sales have slowed to practically a crawl.
I have to say it ended up affecting me as well. I love gaming, and during the MHz wars I built a new PC every other year (with a CPU or GPU upgrade in the off year), and now? My 4 year old octocore with 16gb of RAM spends more time twiddling its electron thumbs than I can find work for it, and that is with me recording gameplay, editing videos, and even doing multitrack audio DSP renders. My previous PC is a Phenom II X4, and despite its age it still purrs like a kitten and plays the wife's World Of Warships at 60fps.
And I'm what would be considered a "hardcore" user. For regular users that are just doing normal office tasks and running basic programs? Well, I have many customers that are still quite happy with their C2D and Athlon X2 laptops despite their age, and I can see why, as my 6 year old AMD netbook is perfect for service calls, looking up parts, etc. Users simply cannot come up with enough work to make their DDR 3 or even DDR 2 system obsolete, so I have a feeling it will take a while before the majority are using DDR 5.
(Score: 2) by takyon on Monday April 03 2017, @03:10AM (5 children)
This certainly matters since it will have effects on enterprise.
Personally, I think capacity/density matters more than speed for the home user. And the home user can see clear benefits from having lots of RAM. I'm using a 2 GB machine right now and wishing it had 4 or 8. If you get up to 32 GB or more, you can do more stuff, handle the crap Web 4.0 throws at you, and if you have way too much you could make a ramdrive. That's not apparent to the home user, but maybe operating systems could create ramdisks automatically out of spare RAM.
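For what it's worth, most Linux systems already expose a ready-made RAM disk at /dev/shm (a tmpfs mount). A minimal sketch of using it, assuming Linux, with paths and sizes that are purely illustrative:

import os, time

def timed_write(path, data):
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())    # push the write past the page cache
    os.remove(path)
    return time.perf_counter() - start

payload = os.urandom(256 * 1024 * 1024)                        # 256 MiB test buffer
print("tmpfs:", timed_write("/dev/shm/ram_test.bin", payload))
print("disk: ", timed_write("/var/tmp/ram_test.bin", payload))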
Home users will eventually get DDR5, just as they are starting to get DDR4 in their newly bought systems. That's unless another technology like HBM overtakes DRAM DIMMs. I don't think that will happen.
JEDEC has not given us enough info to let us know why density is doubling. In fact, it doesn't make much sense at all. Samsung, Micron, and the others should be handling the density, while JEDEC specifies the speeds/timing, right? Maybe JEDEC is going to add support for 2-layer 3D stacking to DDR5, like with 3D/vertical NAND and High Bandwidth Memory (another JEDEC standard, and each version specifies the maximum height of stacks). They could do this while keeping the DIMM memory module form factor intact.
Or maybe the DDR5 standard will just double the maximum capacity per module, and it is just bad reporting and ambiguous language in the JEDEC release.
(Score: 2) by Scruffy Beard 2 on Monday April 03 2017, @04:30AM (2 children)
I think "density" when it comes to memory modules refers to how much capacity is allowed in each module.
DDR - 1GB
DDR2 - 2GB
--- not sure of the limits of the newer ones.
(Score: 2) by takyon on Monday April 03 2017, @05:07AM
Seems to be 16 GB for DDR3 [wikipedia.org] and 64 GB for DDR4 [wikipedia.org]... but I think manufacturers have exceeded those limits for both DDR3 and DDR4 [thememoryguy.com]. I guess you could call that non-JEDEC standard.
(Score: 2) by shortscreen on Monday April 03 2017, @05:15AM
in other words, the number of address lines present in the DIMM slot
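To make that concrete, a device's capacity is fixed by its addressing: rows x columns x banks x data width. The organization below is the typical layout of an 8 Gb x8 DDR4 part, given purely for illustration:

row_bits, col_bits = 16, 10     # 65,536 rows, 1,024 columns
banks = 16                      # 4 bank groups x 4 banks
width = 8                       # x8 device: 8 data bits per access

bits = (2 ** row_bits) * (2 ** col_bits) * banks * width
print(bits / 2 ** 30, "Gbit")   # -> 8.0; each extra address bit doubles capacity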
(Score: 2) by Hairyfeet on Tuesday April 04 2017, @08:27AM (1 child)
Except machines for the better part of a decade can hold 8Gb of RAM and for most users? That is frankly overkill. Hell the Q6600 I use for the main shop PC has 8Gb of RAM and that unit was literally a throw away from the local cable office because the GPU went out, 2Gb DDR 2 and 4gb DDR 3 sticks are dirt cheap, even 8gb DDR 3 chips are only $50 a stick so maxxing out the RAM in an older system? Really not expensive.
Sure Enterprise can use it, I never said there was NOBODY that would use it. You can sell the Enterprise 64 core chips that cost a couple of grand and $15k 4Tb SSDs and they'll snatch them up and ask for more because all they care about is iOPs and when they are handling millions of transactions a day? Throwing 20k at a box is really no big deal.
But that has nothing to do with mainstream, and let's face it, it's the mainstream that companies want. The Enterprise market has been tightening its belt for years; you just don't see the mega corps throwing money away on IT like you did back in the early 00s. Now it's all about doing more with less, offshoring, and virtualization that lets them do the job of what would have been a dozen new units on a single box, so enterprise sales alone? Not gonna drive the industry. Why do you think GPUs have been taking huge leaps in design and CPUs have not? Or why the focus of the GPU industry is NOT on the top of the line units but the crucial $100-$250 market? Because THAT is where the mainstream customers are, and while they haven't been replacing their CPUs, they have been swapping GPUs to play that hot new game!
So I stand by my statement: we will be looking at several years before DDR 5 becomes the RAM on the majority of systems, and I bet DDR 4 will simply go nowhere; like GDDR 4, many will end up skipping it completely, waiting until their DDR 2 and DDR 3 systems die and then going with DDR 5. I see it out in the field all the time: systems brought in for cleaning with 8gb of RAM and quad cores, and the users are in no hurry to get new hardware, especially after I show them how fast an SSD OS drive makes even a C2Q feel. They simply see no point in shelling out several hundred on a new system when the one they have does everything they ask of it.
(Score: 2) by takyon on Tuesday April 04 2017, @05:37PM
Please use GB and Gb correctly. 8 Gb = 1 GB.
I could definitely use more RAM, and as I said, there's options for when you have "too much". Even cheaper laptops are coming with 12+ GB of RAM (here's 12 GB at $330 [slickdeals.net], and this refurb has 16 GB and high specs for $700 [slickdeals.net]). Although the HDD to SSD transition is going to be more important for most users.
I will note that the cutting edge DDR4... just isn't expensive. The DRAM market has had oversupply for some time due to the decline in the PC market. Obviously, getting a new motherboard or processor is much more expensive, but if you happen to have done that, switching to DDR4 is not hard on the wallet. Some new desktops or laptops are in the $300 range and come with DDR4.
Some [anandtech.com] are predicting that memory modules will be replaced by HBM on package. Although HBM is currently more expensive, the smaller profile is well-suited for Ultrabooks or Chromebooks, even if it is not user-replaceable.
(Score: 0) by Anonymous Coward on Monday April 03 2017, @07:26PM
Moving up to those newer processors saddles you with a ton of DRM. Not a huge deal for regular users, but if you don't REALLY REALLY need the extra CPU horsepower or the power savings, why not stick with your older hardware, which you are PRETTY SURE can't be remotely accessed without OS level exploits, rather than the new ones that very well might have those exploits baked into the firmware?
(Score: 2) by kaszz on Monday April 03 2017, @04:05PM
If they increase the minimum DRAM prefetch to 16n, then there will be a lot of transfer waits before you can reach a specific position. This will most likely wreck memory accesses that are truly random (like synthesis of configurable gate arrays). Higher speeds with lower voltage will also narrow the signal-to-noise margin.
Modern "DRAM" is more like a post office where you send your requests and hopefully get a response back when you need them.