
posted by janrinok on Thursday May 31 2018, @03:28PM   Printer-friendly
from the I'm-still-trying-to-master-Donkey-Kong dept.

Intel has announced 3D XPoint DIMMs ranging from 128 GB to 512 GB per module:

Intel today announced the availability of their long-awaited Optane DIMMs, bringing 3D XPoint memory onto the DDR4 memory bus. The modules that have been known under the Apache Pass codename will be branded as Optane DC Persistent Memory, to contrast with Optane DC SSDs, and not to be confused with the consumer-oriented Optane Memory caching SSDs.

The new Optane DC Persistent Memory modules will be initially available in three capacities: 128GB, 256GB and 512GB per module. This implies that they are probably still based on the same 128Gb 3D XPoint memory dies used in all other Optane products so far. The modules are pin-compatible with standard DDR4 DIMMs and will be supported by the next generation of Intel's Xeon server platforms.

The Optane DC Persistent Memory modules Intel is currently showing off have heatspreaders covering the interesting bits, but they appear to feature ten packages of 3D XPoint memory. This suggests that the 512GB module features a raw capacity of 640GB and that Optane DC Persistent Memory DIMMs have twice the error correction overhead of ECC DRAM modules.
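
(For the arithmetic, assuming each of the ten packages contributes 64GB: 10 × 64GB = 640GB raw for 512GB usable, i.e. 2 extra packages per 8 of data, where a standard ECC DIMM carries 1 extra chip per 8; hence twice the overhead.)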

Also at Tom's Hardware and Ars Technica.


Original Submission

 
  • (Score: 0) by fakefuck39 on Thursday May 31 2018, @11:11PM (9 children)

    by fakefuck39 (6620) on Thursday May 31 2018, @11:11PM (#686978)

    I literally did a presentation today on the availability of this on EMC arrays this year and how it works. Either read more than a spec sheet to get your information, or get a job where you're taught these things.

  • (Score: 2) by jmorris on Friday June 01 2018, @12:57AM (8 children)

    by jmorris (4844) on Friday June 01 2018, @12:57AM (#687006)

    I have no idea what YOU are playing with, but I'm reading Intel's own words about the product line. They aren't positioning it for storage arrays and other deeply nested SAN/networked stuff that would kill the primary advantage of putting mass storage in a DDR4 slot. Latency is the dragon they are aiming to slay here. They want people to directly memory map these things into a running program's address space and reap the advantage of not having to bother with block layers, filesystems or a SAN. They are supplying example code to use with C, C++, Java and even frickin' Python. The closest they get to directly networked would be a memcached use case, or the example they mention of a simple key/value store with yuge speed and size as the advantages.
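
    That "directly memory map" model fits in a few lines of C. A minimal sketch, assuming a file on a DAX-mounted pmem filesystem (the mount point and sizes here are made up; PMDK wraps the same idea in friendlier APIs):

        /* Map a file on a DAX-mounted persistent-memory filesystem
         * straight into the address space: ordinary loads and stores
         * then hit the media with no block layer or filesystem I/O
         * path in between. The path is hypothetical. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            const size_t len = 4096;
            int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0644);
            if (fd < 0) { perror("open"); return 1; }
            if (ftruncate(fd, len) != 0) { perror("ftruncate"); return 1; }

            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            strcpy(p, "still here after a power cycle, once flushed");

            /* msync() is the portable flush; on real pmem, PMDK's
             * pmem_persist() does the same job with user-space cache
             * flush instructions instead of a syscall. */
            if (msync(p, len, MS_SYNC) != 0) perror("msync");

            munmap(p, len);
            close(fd);
            return 0;
        }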

    • (Score: 2) by jmorris on Friday June 01 2018, @12:58AM

      by jmorris (4844) on Friday June 01 2018, @12:58AM (#687007)

      https://software.intel.com/en-us/persistent-memory [intel.com]

      Pooched the hyperlink. Preview is a good thing.

    • (Score: 0) by fakefuck39 on Friday June 01 2018, @01:18AM (2 children)

      by fakefuck39 (6620) on Friday June 01 2018, @01:18AM (#687013)

      I'm not playing with anything. The enterprise storage arrays coming out at the end of this year and next year all use SCM. The presentation I did today was specifically on the EMC PowerMax array, which will have this exact module from Intel available in Q3.

      from your link: "This new memory type, sometimes called storage class memory."

      The programming specific to this type of memory is for a storage array operating system, not your host. If you are talking about servers not sitting on a SAN, well, companies don't have those. Yes, you might have some little shop you work for that does its little thing. Those might use storage-class memory for servers. They are also statistically irrelevant: data, memory, and compute are located at large enterprises, and your use case is a rounding error. When we talk about something, we talk about what people use it for, and by people I mean 99% of use cases: enterprise-level corporations, not some dinky-dink little scenario you come up with in your little bit of C code no one knows about.

      • (Score: 2) by jmorris on Friday June 01 2018, @01:38AM (1 child)

        by jmorris (4844) on Friday June 01 2018, @01:38AM (#687021)

        Ok, good to know Intel doesn't have a clue what their products are actually used for. [/sarc]

        • (Score: 0) by fakefuck39 on Friday June 01 2018, @09:59PM

          by fakefuck39 (6620) on Friday June 01 2018, @09:59PM (#687467)

          Oh, they do, and so do their business partners. Unfortunately, watching some video on a website and reading a 5-point bullet summary doesn't mean you have any idea.

          I've always wondered why there are adults working PC Support or competing for coding jobs with some Indians who don't know how to use toilet paper. You just explained it to me. Some idiot can spend 5 minutes reading some little blurb on the internet and think himself an expert on something. He then stops digging deeper, because "he's supa smart" and is already an expert. Further information? Discard it; that little blurb I read is all I need to know on the subject. This of course produces a resource who thinks he knows everything, actually knows nothing, and is unwilling to learn. And that's how you have 40-year-old people doing the needful job of an intern.

          Seriously - please stay the retard you are. It drives my salary up by reducing the competition. In the meantime I'll be configuring storage arrays with petabytes of these things at some of the largest global companies. And seriously - look up how a 512-byte block uses 520 bytes on storage arrays. Even the 40-year-old PC Support guy has known this for 20 years. That's just embarrassing.
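
          For reference, the extra 8 bytes are the T10 DIF protection tuple appended to each 512-byte logical block. A sketch of the layout, following the T10 spec rather than any particular array's on-disk format:

              /* T10 DIF: each 512-byte logical block carries an 8-byte
               * protection tuple on the media, so 512 + 8 = 520 bytes. */
              #include <stdint.h>

              struct t10_dif_tuple {
                  uint16_t guard_tag; /* CRC-16 over the 512 data bytes */
                  uint16_t app_tag;   /* application/owner defined */
                  uint32_t ref_tag;   /* typically low 32 bits of the LBA */
              };
              /* 512 + sizeof(struct t10_dif_tuple) == 520 */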

    • (Score: 2) by DannyB on Friday June 01 2018, @04:05PM (3 children)

      by DannyB (5839) Subscriber Badge on Friday June 01 2018, @04:05PM (#687291) Journal

      They want people to directly memory map these things into a running program's address space and reap the advantage of not having to bother with block layers, filesystems or a SAN.

      That is my understanding as well.

      If I am understanding this correctly, we finally have the holy grail of memory that is fast, cheap and non-volatile. Or, said differently, block storage that becomes main memory.

      I daydreamed about such a possibility at two points earlier in life: in the 1970s, when magnetic bubble memory was being toyed with, and in the late 1990s, playing with a Palm Pilot. It didn't have any "storage" per se; the device merely went into a very low power mode. Everything was kept in memory all the time. A true reboot was akin to reformatting the drive.

      I pondered how profoundly it would change system design if storage were as fast as main memory, yet stayed as cheap and plentiful as storage is now. The idea of slow "storage" and fast volatile main memory is DEEPLY baked into the designs of all existing OSes, although Linux is perhaps more adaptable here than most.

      Am I fundamentally misunderstanding this development?

      There was once an earlier time, with magnetic core memory, when main memory was non-volatile. Problems: extremely expensive and labor-intensive to fabricate, extremely limited capacities, slow. No wonder semiconductor memory took off.

      And storage was non-volatile, reliable, cheap (relatively speaking) and very slow compared to main memory. Our system designs for decades have been deeply tied to the basic assumption that block storage is slow, massive and cheap, but main memory is fast, volatile and expensive.

      If storage IS main memory, will we still have "file systems"? The notion of organizing things into folders and files is pretty deeply baked into our thinking. Even "non-computer people".

      --
      To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
      • (Score: 2) by jmorris on Friday June 01 2018, @05:56PM (1 child)

        by jmorris (4844) on Friday June 01 2018, @05:56PM (#687357)

        We still don't have the grail HP is looking for to build The Machine. This stuff is fast, but not as fast as the DDR4 that could be put in the same socket. What it has is size and persistence over power cycles; they don't make 512GB DDR4 sticks yet. But it shares the downside of all the solid-state persistent memory techs currently in use: shockingly low write endurance, which makes it entirely unsuitable as main memory.

        So we still have RAM and primary storage; this stuff is just really fast and, most importantly, can be directly mapped into a process. Worrying about NUMA is nothing compared to read/write file access. Even taking care to minimize the writes is bearable for a lot of workloads.
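
        A sketch of what "taking care to minimize the writes" can look like, assuming a page-aligned MAP_SHARED mapping of a DAX file as in the example upthread (log_append is illustrative, not from any real codebase):

            #include <string.h>
            #include <sys/mman.h>

            /* Append a record to a pmem-backed log: one sequential
             * store stream plus one flush of just the dirtied range,
             * instead of scattered in-place updates that burn write
             * endurance. 'base' must be the page-aligned mapping. */
            static size_t log_append(char *base, size_t off,
                                     const void *rec, size_t n)
            {
                memcpy(base + off, rec, n);
                size_t start = off & ~(size_t)4095;  /* page-align down */
                msync(base + start, (off - start) + n, MS_SYNC);
                return off + n;
            }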

        We will still have filesystems, though they might look very different in the future. Palm didn't use a traditional filesystem, but it was doing most of the same things as one, and the line was blurring by the time they abandoned PalmOS for yet another front end atop Linux.

        Never got a chance to put my hands on bubble memory, but I did follow it closely. It was never a contender for RAM: it was block oriented, with transfer speeds more in the range of a hard drive. What it had was no seek latency (which, we should remember, was horrid; seek plus dwell could take a hundred or more milliseconds, an eternity for computers) and no moving parts, which attracted aerospace customers and others who needed computing in bumpy environments.

        • (Score: 2) by DannyB on Friday June 01 2018, @08:47PM

          by DannyB (5839) Subscriber Badge on Friday June 01 2018, @08:47PM (#687449) Journal

          Thanks.

          Only closer to that holy grail.

          I am left to wonder, if we ever get something that plugs in as "main memory" but is persistent across power loss, how radically it would affect the design of systems.

          --
          To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
      • (Score: 0) by fakefuck39 on Saturday June 02 2018, @05:20AM

        by fakefuck39 (6620) on Saturday June 02 2018, @05:20AM (#687588)

        You are fundamentally misunderstanding this tech. It's not meant for a server, although you could in theory use it in one. The bulk of these chips will go to serve as sub-millisecond response time storage, which the server sees as regular disk. This is meant for the backend, which itself runs a complex OS, using Xeon chips, over an InfiniBand bus. As an example, the top array right now that will be using these has 16TB of DRAM, backed by 3DXP DIMMs. It's likely to be hooked up to something like a Cisco UCS, running a few thousand VMs.

        But yes, you can use it internally in some stand-alone server with some custom code. Given how expensive these will be (the array using these is ~$10mil), I doubt anyone not using enterprise-grade storage will, though. From your point of view, these will just look like a really fast normal LUN presented to your server over a couple of 32Gb/s SAN connections.