posted by janrinok on Thursday May 31 2018, @03:28PM
from the I'm-still-trying-to-master-Donkey-Kong dept.

Intel has announced 3D XPoint DIMMs ranging from 128 GB to 512 GB per module:

Intel today announced the availability of their long-awaited Optane DIMMs, bringing 3D XPoint memory onto the DDR4 memory bus. The modules that have been known under the Apache Pass codename will be branded as Optane DC Persistent Memory, to contrast with Optane DC SSDs, and not to be confused with the consumer-oriented Optane Memory caching SSDs.
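
(As an aside: "persistent memory" here means the modules are byte-addressable from the CPU rather than sitting behind a block-device interface. A minimal sketch of how an application might use such memory, assuming the OS exposes it as a file on a DAX-capable filesystem at a hypothetical /mnt/pmem mount point:)

    import mmap, os

    # Hypothetical path; assumes the persistent-memory region is exposed
    # as a DAX-backed file rather than as a block device.
    path = "/mnt/pmem/example.dat"
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, 4096)

    m = mmap.mmap(fd, 4096)   # plain load/store access, no read()/write() calls
    m[:5] = b"hello"          # ordinary memory writes land in 3D XPoint
    m.flush()                 # msync(): make the stores durable
    m.close()
    os.close(fd)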

The new Optane DC Persistent Memory modules will be initially available in three capacities: 128GB, 256GB and 512GB per module. This implies that they are probably still based on the same 128Gb 3D XPoint memory dies used in all other Optane products so far. The modules are pin-compatible with standard DDR4 DIMMs and will be supported by the next generation of Intel's Xeon server platforms.
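
(A quick sanity check of that inference, treating each 128 Gb die as 16 GB of user capacity; the die counts below are back-of-the-envelope figures, not Intel numbers:)

    # 128 Gbit 3D XPoint die -> 16 GB of user data per die (assumption
    # carried over from earlier Optane products, as suggested above)
    GB_PER_DIE = 128 / 8

    for module_gb in (128, 256, 512):
        print(f"{module_gb} GB module -> {module_gb / GB_PER_DIE:.0f} dies of user data")
    # -> 8, 16 and 32 dies respectively, before any error-correction overhead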

The Optane DC Persistent Memory modules Intel is currently showing off have heatspreaders covering the interesting bits, but they appear to feature ten packages of 3D XPoint memory. This suggests that the 512GB module features a raw capacity of 640GB and that Optane DC Persistent Memory DIMMs have twice the error correction overhead of ECC DRAM modules.
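
(The overhead claim is simple arithmetic. The sketch below assumes ten packages of four 16 GB dies each, which is an inference from the module photos rather than a confirmed specification:)

    # Raw vs. advertised capacity for the 512 GB module (assumed layout:
    # 10 packages x 4 dies x 16 GB per die, inferred rather than confirmed)
    raw_gb = 10 * 4 * 16                                 # 640 GB of raw 3D XPoint
    usable_gb = 512                                      # advertised capacity
    optane_overhead = (raw_gb - usable_gb) / usable_gb   # 0.25 -> 25% extra bits
    ecc_dram_overhead = 8 / 64                           # 72-bit ECC DIMM: 12.5% extra
    print(f"Optane DIMM overhead {optane_overhead:.0%} vs ECC DRAM {ecc_dram_overhead:.0%}")
    # 25% is twice ECC DRAM's 12.5%, matching the claim above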

Also at Tom's Hardware and Ars Technica.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Thursday May 31 2018, @11:46PM (#686992)

    This is called storage-class memory (SCM) and is geared for storage arrays, not servers. The data in fact all sits in mirrored DRAM before it reaches the flash, and it gets deduped and compressed first. Of course, the storage arrays it's made for have something like 10 TB of RAM for cache and hundreds of Xeon CPUs. We are not talking about a RAID card here.

    The block size on most arrays is indeed 520 bytes; you are correct that the host only sees 512 bytes of that. The host is far away from it, though. Before the host sees a virtual disk that's spread over all the disks in the array, there are 32 Gb front-end adapters and an NVMe bus, and, believe it or not, the DIMMs traverse that NVMe bus along with the regular SSDs before the data comes out the front-end ports. There are multiple layers of wear leveling, caching, and data protection along the way.

    If you are interested in reading more, there is Pure Storage on the low end and EMC on the high, super-expensive end. Most Fortune 500 companies will have multiple tiers of arrays and mix in all kinds of low- to high-end gear, but we're still talking racks of storage arrays connected to a SAN, not servers or RAID cards.
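
    For anyone unfamiliar with 520-byte formatting, here is a rough sketch of the arithmetic, assuming the common layout of 512 data bytes plus an 8-byte per-sector protection field (the T10-PI guard/app/ref split below is just one example; arrays use their own vendor-specific trailers):

        import struct

        SECTOR_DATA = 512                    # what the host sees per sector
        SECTOR_RAW = 520                     # what the array formats on its drives
        trailer = SECTOR_RAW - SECTOR_DATA   # 8 bytes of protection info per sector

        # One common trailer layout (T10 Protection Information): 2-byte guard CRC,
        # 2-byte application tag, 4-byte reference tag.  Values here are illustrative.
        pi = struct.pack(">HHI", 0xBEEF, 0x0001, 0x00000042)
        assert len(pi) == trailer

        print(f"per-sector overhead: {trailer / SECTOR_DATA:.2%}")   # ~1.56%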