(Source: A Register article mentioned the new XEON having "MRDIMM" capability. "What is MRDIMM?")
Keeping up with the latest in hardware is hard, and around the turn of the century there was a new technology in every magazine on the rack.
Today, we've hit some fatigue and just don't keep up as much. Right? :-) Anyway, while most of us have heard of Dell's (and Lenovo's) proposal for CAMM modules to replace the multi-stick SO-DIMM sockets, servers are getting a new standard, too: M(C)RDIMMs -- Multiplexed (Combined) Rank Dual Inline Memory Modules.
Some excerpts from product briefs, such as Micron's:
- DDR5 physical and electrical standards
- up to 256GB modules
- increased <everything that makes it good>
By implementing DDR5 physical and electrical standards, MRDIMM technology delivers a memory advancement that allows scaling of both bandwidth and capacity per core to future-proof compute systems and meets the expanding demands of data center workloads. MRDIMMs provide the following advantages over RDIMMs:
- Up to 39% increase in effective memory bandwidth
- Greater than 15% better bus efficiency
- Up to 40% latency improvements compared to RDIMMs
MRDIMMs support a wide capacity range from 32GB to 256GB in standard and tall form factors (TFF), which are suitable for high-performance 1U and 2U servers. The improved thermal design of TFF modules reduces DRAM temperatures by up to 20 degrees Celsius at the same power and airflow, [...] enabling more efficient cooling capabilities in data centers and optimizing total system task energy for memory-intensive workloads. Micron's industry-leading memory design and process technology using 32Gb DRAM die enables 256GB TFF MRDIMMs to have the same power envelope as 128GB TFF MRDIMMs using 16Gb die. A 256GB TFF MRDIMM provides a 35% improvement in performance over similar-capacity TSV RDIMMs at the maximum data rate.
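A quick back-of-the-envelope check on the die math behind that last claim (my own arithmetic, not Micron's; it just shows why a 256GB module built from 32Gb die can land in the same power envelope as a 128GB module built from 16Gb die -- the die count comes out the same):

```python
# Illustrative die-count arithmetic (assumption: capacity = die count x die density,
# ignoring ECC devices and any on-module buffers).

def die_count(module_gb: int, die_gbit: int) -> float:
    """How many DRAM die of `die_gbit` gigabits are needed for a `module_gb` GB module."""
    die_gb = die_gbit / 8          # a 32Gb die stores 4GB, a 16Gb die stores 2GB
    return module_gb / die_gb

print(die_count(256, 32))   # 64.0  -> 256GB module from 32Gb die
print(die_count(128, 16))   # 64.0  -> 128GB module from 16Gb die: same die count,
                            #          hence a similar power envelope
print(die_count(256, 16))   # 128.0 -> 256GB from 16Gb die would need twice the die
```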
And SK Hynix has their own variety, touting a data rate of 8.8GT/s (over DDR5's 6.4GT/s).
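For a rough sense of where those percentages come from, here's a quick sketch (my arithmetic, assuming the 8.8 and 6.4 figures are per-pin data rates on a standard 64-bit data channel, ECC bits ignored):

```python
# Peak per-channel bandwidth = data rate (GT/s) x data-bus width (bytes).
# Assumes a 64-bit-wide DDR5 data channel (two 32-bit subchannels); ECC excluded.

def peak_bandwidth_gb_s(data_rate_gt_s: float, bus_width_bits: int = 64) -> float:
    return data_rate_gt_s * (bus_width_bits / 8)

rdimm  = peak_bandwidth_gb_s(6.4)   # DDR5-6400 RDIMM: 51.2 GB/s
mrdimm = peak_bandwidth_gb_s(8.8)   # MRDIMM at 8800 MT/s: 70.4 GB/s

print(f"RDIMM:  {rdimm:.1f} GB/s per channel")
print(f"MRDIMM: {mrdimm:.1f} GB/s per channel")
print(f"Uplift: {(mrdimm / rdimm - 1) * 100:.0f}%")  # ~38%, the same ballpark as the
                                                     # "up to 39%" effective-bandwidth claim
```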
New for 2024, shipping in H2, it seems. Keep up with the times! Grow your RAM modules. Taller (literally).
(Score: 3, Interesting) by DrkShadow on Thursday September 26, @10:48PM (2 children)
What is multi-rank memory? I seem to remember (EDO? DDR? DDR2?) DIMMs back in the day needed 2- or 4- or 8-rank, and the rank had to match between the DIMMs or no boot.
Why does it matter, then, that these chips multiplex the rank?
--
Also, they say that it supports at most one DIMM per channel. If you had two DIMMs per channel before, doesn't that get you the same? Or only wider data width? (Wouldn't that double bandwidth, as opposed to the ~30% increase of MRDIMMs?)
(Score: 4, Informative) by drussell on Friday September 27, @02:18PM (1 child)
A "rank" is one memory-bus-width worth of chips on the data bus and address lines. Some modules have more than one set of chips connected to the data, address, row/column selection, etc. lines, separated only by the individual chip-select line going to each rank. The memory controller has to be able to drive all of the chips connected in parallel, as normal memory doesn't go tri-state when the chip-select line is deselected.
The term "rank" was coined back in the days of "single sided" vs "double sided" SIMMs, etc. to distinguish between modules that actually had more than one set of chips connected in parallel and those that just used lower-bit-count chips mounted on both sides of the board. (There could also be high-bit-count, wide memory chips mounted on one side of a board that were actually two ranks, so double-sided vs single-sided wasn't really an accurate way to describe them.)
For example, on a 32-bit wide, 72-pin SIMM, you could have 8 x 4-bit wide chips on one side of a board, and it would be a single rank. 8 x 4-bit wide chips, mounted half and half on each side of the board would be dual rank. 8 x 8-bit wide chips on one side of a board would also be dual rank, even though it is all on one side of the board. Using 4 x 16-bit-wide chips can even make a dual rank SIMM that looks like a single rank, high density SIMM; you have to actually look at the pin count and understand the pinouts and how they're selected to know for sure. In theory, you could even have a single double-tall SIMM that had 32 single-bit-wide chips, mounted 16 on each side of the board, and it would still just be a single rank. Etc. etc.
On 8-bit wide, 30-pin SIMMs, there were often eight individual 1-bit-wide chips, so the second rank would be the second side of the board with another 8 bits worth of chips. This would basically be the same as having two single-sided SIMMs in separate sockets. Since most machines in that era needed 32 bits, you had to fill four sockets at a time, so if you tried to put, say, two single-sided and two double-sided SIMMs in or something, it's not going to work, because there would then be 16 bits worth missing for half the address space. With a capable memory controller, that could potentially be a valid configuration on a 286 or 386SX, though...
Obviously, newer memory modules are typically 64 bits wide, but the same principle applies: however many 64-bit-widths-worth of chips you have determines how many ranks there are, just selected between by their chip-select lines (connected somehow to something the memory controller can address each rank by).
see: https://en.wikipedia.org/wiki/Memory_rank [wikipedia.org]
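For anyone who prefers the arithmetic spelled out: a rank is simply one full data-bus width of chips, so ranks = (number of chips x bits per chip) / module data width. A small sketch reproducing the SIMM examples above (illustrative only; in real hardware it's the chip-select wiring, not the arithmetic, that defines the ranks):

```python
def ranks(num_chips: int, bits_per_chip: int, module_width_bits: int) -> int:
    """How many full module-data-widths the chips add up to."""
    total_bits = num_chips * bits_per_chip
    assert total_bits % module_width_bits == 0, "chips don't fill a whole number of ranks"
    return total_bits // module_width_bits

# 32-bit-wide, 72-pin SIMM examples from the comment above:
print(ranks(8,  4, 32))   # 1 rank  -- eight 4-bit chips
print(ranks(16, 4, 32))   # 2 ranks -- two sets of eight 4-bit chips (see correction below)
print(ranks(8,  8, 32))   # 2 ranks -- eight 8-bit chips, even all on one side
print(ranks(4, 16, 32))   # 2 ranks -- four 16-bit chips that look like one high-density rank
print(ranks(32, 1, 32))   # 1 rank  -- thirty-two 1-bit chips on a double-tall board

# 8-bit-wide, 30-pin SIMMs:
print(ranks(8,  1, 8))    # 1 rank  -- single-sided
print(ranks(16, 1, 8))    # 2 ranks -- double-sided
```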
(Score: 3, Informative) by drussell on Friday September 27, @02:20PM
OOPS...
I meant to say, "TWO SETS of 8 x 4-bit wide chips (16 chips total), mounted half and half on each side of the board would be dual rank."
(Score: 3, Touché) by Snotnose on Thursday September 26, @11:20PM (1 child)
who mentally reads this as Mr. Dimm.
Bad decisions, great stories
(Score: 2) by coolgopher on Thursday September 26, @11:40PM
I'll be more worried when it reads as Mr. O'Dimm.