from the breaking-the-powers-of-two-shackles dept.
Need a New Year's resolution? How about stop paying for memory you don't need:
We're all used to dealing with system memory in neat powers of two. As capacity goes up, it follows a predictable binary scale, doubling from 8GB to 16GB to 32GB and so on. But with the introduction of DDR5 and non-binary memory in the datacenter, all of that's changing.
Instead of jumping straight from a 32GB DIMM to a 64GB one, DDR5, for the first time, allows for half steps in memory density. You can now have DIMMs with 24GB, 48GB, 96GB, or more in capacity.
The added flexibility offered by these DIMMs could end up driving down system costs, as customers are no longer forced to buy more memory than they need just to keep their workloads happy.
Non-binary memory isn't actually all that special. What makes non-binary memory different from standard DDR5 comes down to the chips used to make the DIMMs.
Instead of the 16Gb — that's gigabit — dies found on most DDR5 memory today, non-binary DIMMs use 24Gb DRAM chips. Take 20 of these chips and bake them onto a DIMM, and you're left with 48GB of usable memory after you take into account ECC and metadata storage.
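For illustration only, here's that arithmetic as a quick Python sketch. The split of the 20 chips into 16 data chips and 4 ECC/metadata chips is an assumption about a typical DDR5 RDIMM layout, not something the article spells out.

```python
# Back-of-the-envelope DIMM capacity math, for illustration only.
# Assumption (not from the article): of the 20 chips on the DIMM,
# 16 hold data and 4 are reserved for ECC/metadata.

GBIT_PER_GBYTE = 8

def usable_capacity_gb(die_density_gbit, data_chips=16):
    """Usable capacity contributed by the data chips alone."""
    return die_density_gbit * data_chips / GBIT_PER_GBYTE

print(usable_capacity_gb(16))  # 32.0 GB per DIMM from 16Gb dies
print(usable_capacity_gb(24))  # 48.0 GB per DIMM from 24Gb dies
```

Under that assumption, the remaining four chips carry ECC and metadata, which is why 20 chips of 24Gb (60GB raw) net out to 48GB usable.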
[...] To date, all of the major memory vendors, including Samsung, SK-Hynix, and Micron, have announced 24Gb DRAM chips for use in non-binary DIMMs.
[...] Of course non-binary memory isn't the only way to get around the memory-core ratio problem.
"Technologies such as non-binary capacities are helpful, but so is the move to CXL memory — shared system memory — and on-chip high-bandwidth memory," Lam said.
[...] However, they are less an alternative to non-binary memory and more of a complement to it. In fact, Astera Labs' expansion modules should work just fine with 48GB, 96GB, or larger non-binary DIMMs.
(Score: 2) by EJ on Wednesday January 04, @01:53AM
Linus broke this when making his own DIMMs.
(Score: 5, Funny) by Rosco P. Coltrane on Wednesday January 04, @03:38AM (2 children)
Memory modules that want to be called they/them?
(Score: 3, Touché) by Anonymous Coward on Wednesday January 04, @05:16AM (1 child)
It's almost as if the author engineered the title for this kind of response.
(Score: 5, Informative) by driverless on Wednesday January 04, @07:18AM
Yup, definitely, particularly since the correct description is "non-power-of-two", not "non-binary".
(Score: 4, Insightful) by sjames on Wednesday January 04, @03:44AM (3 children)
Everything old is new again. Back in the before time, if you needed 24GB, you'd install 3 8GB sticks or a 16 and an 8.
(Score: 5, Funny) by driverless on Wednesday January 04, @07:21AM (1 child)
I once inherited an HP-serviced desktop and was a bit puzzled about its odd memory size, 12GB or some other such odd value. Opened it up to look inside, swapped the RAM modules into the correct slots, and suddenly I had 32GB or similar. Kudos to HP's engineering guys that they managed to design a motherboard such that HP's highly trained service monkeys could put the memory modules into the wrong slots and it'd still function, albeit at limited capacity.
(Score: 4, Interesting) by sjames on Wednesday January 04, @04:12PM
HP makes some really weird hardware anyway. Often quite robust but way over-complicated and with a few really notable screw-ups in firmware. I wouldn't be surprised if the hardware didn't care which slot was which but the BIOS had some odd limitation in enumerating memory.
(Score: 2) by bussdriver on Thursday January 05, @12:49AM
DDR5 has two channels per DIMM: where you previously had to install DIMMs in pairs and interleave them for speed, DDR5 does that on-board with a single DIMM.
So a pair of DDR5 DIMMs is really four channels, the equivalent of two interleaved pairs in the old scheme. This is why you run into issues trying to run four DIMMs: it's actually eight channels at that point, and the signal noise becomes a problem on current motherboards (which, from what I've read, is what causes the lower-than-expected performance).
So my first thought was that they had simply added one more channel and given up the speed; but to handle half sizes, it shouldn't be a leap to leverage that to get 50% bumps in capacity while keeping both channels. Example: 16 + 8 = 24.
(Score: 5, Insightful) by Rosco P. Coltrane on Wednesday January 04, @04:14AM (5 children)
When you buy a new machine, it's always a good idea to max it out from the get-go, because you know that at some point stuff that runs well today will inevitably turn into bloatware years later. What's the point of nitpicking between 24G and 32G? Unless you're certain whatever you run will never run out of memory, or you need to deploy thousands of the same machine and the small savings add up, just go with 32G.
(Score: 3, Insightful) by acid andy on Wednesday January 04, @04:39AM
I was going to say something similar. An incremental upgrade is probably a false economy: even though you're spending a bit less on that occasion, you'll just end up having to buy upgrades more frequently (there's a reason why implementations of dynamic arrays usually double their capacity each time more memory needs to be allocated). It's worse if you've filled all the RAM slots and have to remove some of your existing RAM to make space for the larger-capacity DIMMs next time.
Master of the science of the art of the science of art.
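As an aside on that dynamic-array parenthetical, here is a minimal, purely illustrative sketch of the doubling strategy being referred to.

```python
# Minimal sketch of the doubling (geometric growth) strategy dynamic
# arrays typically use: growing capacity by a constant factor keeps
# appends cheap on average, at the cost of over-allocating -- the same
# trade-off as buying the bigger DIMM up front.

def grown_capacity(current, needed):
    """Return a capacity of at least `needed`, doubling from `current`."""
    capacity = max(current, 1)
    while capacity < needed:
        capacity *= 2
    return capacity

print(grown_capacity(8, 9))    # 16
print(grown_capacity(16, 33))  # 64
```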
(Score: 3, Interesting) by takyon on Wednesday January 04, @07:59AM (2 children)
The point is to introduce modules based on 24 Gb dies instead of 16 Gb dies, as a stopgap because 32 Gb has been harder to reach than the industry expected. So 48 GB modules instead of 32 GB are also a possibility. It's just that 24 GB replaces 16 GB with the same number of chips.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 3, Interesting) by sjames on Wednesday January 04, @04:19PM (1 child)
Either that or a new process had a higher than desired defect rate and they're disabling parts that fail in testing.
(Score: 2) by takyon on Wednesday January 04, @06:08PM
They're based on 24 Gb chips. This has been coming in slow motion since July 2021 [soylentnews.org].
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 5, Informative) by richtopia on Wednesday January 04, @03:38PM
The article is describing servers as the main use case. When you have a well-known application and memory load, you can spec the system accordingly. The savings could add up for the sizes quoted in the article.
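To make that concrete, here is a toy calculation; the per-GB price, fleet size, and DIMM count below are hypothetical placeholders, not figures from the article.

```python
# Toy fleet-cost comparison. All numbers here are hypothetical
# placeholders for illustration, not figures from the article.

PRICE_PER_GB = 4.0        # assumed $/GB for server DDR5
SERVERS = 1000            # assumed fleet size
DIMMS_PER_SERVER = 8      # assumed DIMM slots populated

def fleet_memory_cost(gb_per_dimm):
    return gb_per_dimm * PRICE_PER_GB * DIMMS_PER_SERVER * SERVERS

# If the workload fits in 48GB per DIMM, right-sizing with non-binary
# DIMMs instead of rounding up to 64GB saves:
saving = fleet_memory_cost(64) - fleet_memory_cost(48)
print(f"${saving:,.0f}")   # $512,000 under these assumptions
```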
(Score: 5, Interesting) by mhajicek on Wednesday January 04, @04:39AM (1 child)
I figured this would be talking about four-level memory cells, not just capacities other than powers of two.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
(Score: 2) by RS3 on Wednesday January 04, @05:17AM
I'm still waiting for bubble memory. Or memristors.
(Score: 1) by pTamok on Wednesday January 04, @09:28AM (2 children)
I was expecting it to be at least ternary or tri-state memory, much like multi-level flash - so instead of storing bits as the presence or absence of charge, you store them as discrete/discernible levels of charge.
I was disappointed.
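For what it's worth, here is a tiny sketch of that multi-level idea in the style of MLC flash, with four charge levels encoding two bits per cell; the thresholds and the level-to-bits mapping are made up for illustration.

```python
# Illustrative only: quantize a cell's normalized charge into one of
# four discernible levels, encoding two bits per cell the way MLC
# flash does. Thresholds and bit mapping are arbitrary choices here.

THRESHOLDS = [(0.75, 0b11), (0.50, 0b10), (0.25, 0b01)]

def read_cell(charge):
    """Map a normalized charge in [0.0, 1.0] to a two-bit symbol."""
    for threshold, bits in THRESHOLDS:
        if charge >= threshold:
            return bits
    return 0b00

print(read_cell(0.9))  # 3 -> bits 11
print(read_cell(0.3))  # 1 -> bits 01
print(read_cell(0.1))  # 0 -> bits 00
```

Three levels per cell would give the ternary storage described above; the principle is the same.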
(Score: 3, Funny) by krishnoid on Wednesday January 04, @06:56PM (1 child)
It should be available in the future. You might run into nomenclatural [schlockmercenary.com] problems, though.
(Score: 1) by pTamok on Thursday January 05, @11:35AM
Upvote for mention of Schlock Mercenary.