
posted by janrinok on Thursday May 31 2018, @03:28PM
from the I'm-still-trying-to-master-Donkey-Kong dept.

Intel has announced 3D XPoint DIMMs ranging from 128 GB to 512 GB per module:

Intel today announced the availability of their long-awaited Optane DIMMs, bringing 3D XPoint memory onto the DDR4 memory bus. The modules that have been known under the Apache Pass codename will be branded as Optane DC Persistent Memory, to contrast with Optane DC SSDs, and not to be confused with the consumer-oriented Optane Memory caching SSDs.

The new Optane DC Persistent Memory modules will be initially available in three capacities: 128GB, 256GB and 512GB per module. This implies that they are probably still based on the same 128Gb 3D XPoint memory dies used in all other Optane products so far. The modules are pin-compatible with standard DDR4 DIMMs and will be supported by the next generation of Intel's Xeon server platforms.

The Optane DC Persistent Memory modules Intel is currently showing off have heatspreaders covering the interesting bits, but they appear to feature ten packages of 3D XPoint memory. This suggests that the 512GB module features a raw capacity of 640GB and that Optane DC Persistent Memory DIMMs have twice the error correction overhead of ECC DRAM modules.
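
If the modules do use ten packages of the same 128Gb (16GB) dies as the rest of the Optane line, which is an assumption rather than anything Intel has confirmed, the overhead arithmetic works out roughly as:

    10 packages × 64GB/package (4 × 128Gb dies each)  = 640GB raw
    640GB raw - 512GB usable                          = 128GB reserved
    128GB reserved / 512GB usable                     = 25% overhead
    ECC DRAM (1 extra chip per 8)                     = 12.5% overhead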

Also at Tom's Hardware and Ars Technica.


Original Submission

 
  • (Score: 2) by DannyB on Friday June 01 2018, @04:05PM (3 children)

    by DannyB (5839) Subscriber Badge on Friday June 01 2018, @04:05PM (#687291) Journal

    They want people to directly memory map these things into a running program's address space and reap the advantage of not having to bother with block layers, filesystems, or a SAN.

    That is my understanding as well.

    If I am understanding this correctly, we finally have the holy grail of memory that is fast, cheap, and non-volatile. Or, said differently, block storage that becomes main memory.
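
    In concrete terms, a minimal sketch of what that direct mapping could look like on Linux with a DAX-capable filesystem mounted on persistent memory (the /mnt/pmem mount point and file name here are hypothetical, and the region size is arbitrary):

        /* Sketch: map a file on a hypothetical DAX-mounted persistent-memory
         * filesystem straight into the process address space. Loads and stores
         * then hit the persistent medium with no block layer on the data path. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            size_t len = 1 << 20;   /* 1 MiB region, arbitrary */
            int fd = open("/mnt/pmem/data", O_CREAT | O_RDWR, 0644);
            if (fd < 0 || ftruncate(fd, len) != 0) {
                perror("open/ftruncate");
                return 1;
            }

            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
            }

            /* Ordinary memory operations; no read()/write() calls involved. */
            strcpy(p, "this string survives a power cycle once flushed");
            msync(p, len, MS_SYNC);   /* conservatively force it to media */

            munmap(p, len);
            close(fd);
            return 0;
        }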

    I daydreamed about such a possibility at two points earlier in life: in the 1970s, when magnetic bubble memory was being toyed with, and in the mid-1990s, playing with a Palm Pilot. The Pilot didn't have any "storage" per se; the device merely went into a very low power mode, and everything was kept in memory all the time. A true reboot was akin to reformatting the drive.

    I pondered how profoundly it would change system design if storage were as fast as main memory, yet as cheap and plentiful as storage is today. The idea of slow "storage" and fast, volatile main memory is DEEPLY baked into the designs of all existing OSes, although Linux is perhaps more adaptable here than most.

    Am I fundamentally misunderstanding this development?

    There was an earlier time, with magnetic core memory, when main memory was non-volatile. Problems: extremely expensive and labor-intensive to fabricate, extremely limited capacities, and slow. No wonder semiconductor memory took off.

    And storage was non-volatile, reliable, cheap (relatively speaking), and very slow compared to main memory. Our system designs have for decades been deeply tied to the basic assumption that block storage is slow, massive, and cheap, while main memory is fast, volatile, and expensive.

    If storage IS main memory, will we still have "file systems"? The notion of organizing things into folders and files is pretty deeply baked into our thinking, even for "non-computer people".

    --
    The lower I set my standards the more accomplishments I have.
  • (Score: 2) by jmorris on Friday June 01 2018, @05:56PM (1 child)

    by jmorris (4844) on Friday June 01 2018, @05:56PM (#687357)

    We still don't have the grail HP is looking for to build The Machine. This stuff is fast, but not as fast as the DDR4 that could go in the same socket. What it has is capacity and persistence across power cycles; they don't make 512GB DDR4 sticks yet. But it shares the downside of all the solid-state persistent memory techs currently in use: shockingly low write endurance, which makes it entirely unsuitable as main memory.

    So we still have RAM and primary storage; this stuff is just really fast and, most importantly, can be directly mapped into a process. Worrying about NUMA is nothing compared to read/write file access. Even taking care to minimize the writes is bearable for a lot of workloads.
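
    As a rough illustration of what "taking care to minimize the writes" might mean in practice, here is a hypothetical sketch (not anything Intel ships): stage small updates in an ordinary DRAM buffer and touch the persistent mapping only once per batch, so each persistent cell sees one bulk write instead of many small ones. pmem_base stands for a page-aligned mapping obtained as in the sketch above.

        #include <assert.h>
        #include <stddef.h>
        #include <string.h>
        #include <sys/mman.h>

        enum { BATCH = 64 * 1024 };          /* DRAM staging buffer size, arbitrary */

        struct batcher {
            char  *pmem_base;                /* persistent mapping (from mmap) */
            size_t pmem_off;                 /* next free byte in the mapping */
            char   dram[BATCH];              /* volatile staging buffer */
            size_t used;
        };

        static void flush_batch(struct batcher *b)
        {
            if (b->used == 0)
                return;
            /* One bulk write to persistent memory per batch. */
            memcpy(b->pmem_base + b->pmem_off, b->dram, b->used);
            msync(b->pmem_base, b->pmem_off + b->used, MS_SYNC);
            b->pmem_off += b->used;
            b->used = 0;
        }

        static void append(struct batcher *b, const void *data, size_t len)
        {
            assert(len <= BATCH);
            if (b->used + len > BATCH)
                flush_batch(b);
            memcpy(b->dram + b->used, data, len);   /* cheap DRAM write, no endurance cost */
            b->used += len;
        }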

    We will still have filesystems, though they might look very different in the future. Palm didn't use a traditional filesystem, but it was doing most of the same things as one, and the line was blurring by the time they abandoned PalmOS for yet another front end atop Linux.

    Never got a chance to put my hands on bubble memory, but I did follow it closely. It was never a contender for RAM: it was block-oriented, with transfer speeds more in the range of a hard drive. What it had was no seek latency (which, we should remember, was horrid: seek + dwell could take a hundred or more milliseconds, an eternity for computers) and no moving parts, which attracted aerospace customers and others who needed computing in bumpy environments.

    • (Score: 2) by DannyB on Friday June 01 2018, @08:47PM

      by DannyB (5839) Subscriber Badge on Friday June 01 2018, @08:47PM (#687449) Journal

      Thanks.

      Only closer to that holy grail.

      I am left to wonder, if we ever get something that plugs in as "main memory" but is persistent across power loss, how radically it will affect the design of systems.

      --
      The lower I set my standards the more accomplishments I have.
  • (Score: 0) by fakefuck39 on Saturday June 02 2018, @05:20AM

    by fakefuck39 (6620) on Saturday June 02 2018, @05:20AM (#687588)

    You are fundamentally misunderstanding this tech. It's not meant for a server, although you could in theory use it in one. The bulk of these chips will go to serve as sub-millisecond-response-time storage, which the server sees as regular disk. This is meant for the storage backend, which itself runs a complex OS on Xeon chips over an InfiniBand bus. As an example, the top array right now that will be using these has 16TB of DRAM, backed by 3D XPoint DIMMs. It's likely to be hooked up to something like a Cisco UCS, running a few thousand VMs.

    But yes, you can use it internally in some stand-alone server with some custom code. Given how expensive these will be (the array using them is ~$10 million), I doubt anyone not already buying enterprise-grade storage will, though. From your point of view, these will just look like a really fast, otherwise normal LUN presented to your server over a couple of 32Gb/s SAN connections.