from the I'm-still-trying-to-master-Donkey-Kong dept.
Intel has announced 3D XPoint DIMMs ranging from 128 GB to 512 GB per module:
Intel today announced the availability of their long-awaited Optane DIMMs, bringing 3D XPoint memory onto the DDR4 memory bus. The modules that have been known under the Apache Pass codename will be branded as Optane DC Persistent Memory, to contrast with Optane DC SSDs, and not to be confused with the consumer-oriented Optane Memory caching SSDs.
The new Optane DC Persistent Memory modules will be initially available in three capacities: 128GB, 256GB and 512GB per module. This implies that they are probably still based on the same 128Gb 3D XPoint memory dies used in all other Optane products so far. The modules are pin-compatible with standard DDR4 DIMMs and will be supported by the next generation of Intel's Xeon server platforms.
The Optane DC Persistent Memory modules Intel is currently showing off have heatspreaders covering the interesting bits, but they appear to feature ten packages of 3D XPoint memory. This suggests that the 512GB module features a raw capacity of 640GB and that Optane DC Persistent Memory DIMMs have twice the error correction overhead of ECC DRAM modules.
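A rough sanity check of the capacity arithmetic above; the dies-per-package count is an assumption, the other figures come from the summary:

```python
# Assumed: four 128Gb 3D XPoint dies per package; ten packages are visible.
DIE_GBIT = 128                 # die density per the summary
dies_per_package = 4           # assumption, not confirmed by Intel
packages = 10
raw_gb = packages * dies_per_package * DIE_GBIT // 8   # Gbit -> GB
user_gb = 512
spare_ratio = (raw_gb - user_gb) / user_gb             # extra capacity per user byte
ecc_dram_ratio = 8 / 64                                # ECC DIMM: 8 check bits per 64
print(raw_gb, spare_ratio / ecc_dram_ratio)            # 640 2.0
```

Under those assumptions the 512GB module carries 640GB raw, i.e. 25% extra capacity versus the 12.5% of an ECC DRAM DIMM, which is where the "twice the overhead" figure comes from.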
Also at Tom's Hardware and Ars Technica.
Related Stories
What Next for 3D XPoint? Micron to Buy Intel's Share in 3D XPoint Fab
Micron on Thursday announced plans to acquire Intel's stake in IM Flash Technologies, a joint venture between the two companies. IM Flash owns a fab near Lehi, Utah, which is the only producer of 3D XPoint memory that Intel uses for its premium Optane-branded solid-state storage products. Once the transaction is completed, Intel will have to ink a supply agreement with Micron to get 3D XPoint memory after the current agreement finishes at the end of 2019. This will have important ramifications for Intel's 3D XPoint-based portfolio.
Under the terms of the joint venture agreement between Intel and Micron signed in 2005, the latter controls 51% of the company and has the right to acquire the remaining share under certain conditions. Intel already sold Micron its stakes in IM Flash fabs in Singapore and Virginia back in 2012, which left IM Flash with only one production facility, near Lehi, Utah. The fab is used exclusively to produce 3D XPoint memory right now.
[...] While Intel will continue to obtain 3D XPoint from IM Flash until at least mid-2020, there is a big catch. The two companies are set to finish development of their 2nd Gen 3D XPoint [sometime] in the second or the third quarter of calendar 2019. The joint development takes place in IM Flash R&D facilities and the design is tailored for the IM Flash fab and jointly-developed process technology. Therefore, the transaction may potentially affect Intel's ramp up plans for the 2nd Gen 3D XPoint memory. In fact, Intel can manufacture 3D XPoint memory at Fab 68 in Dalian, China, the company said earlier this year. However, since the fab is busy making 3D NAND, Intel may have to adjust its production plans for both types of memory.
Related: Intel and Micron Boost 3D XPoint Production
Intel Announces 3D XPoint Persistent Memory DIMMs
Micron: 96-Layer 3D NAND Coming, 3D XPoint Sales Disappoint
(Score: 2) by ledow on Thursday May 31 2018, @03:38PM (2 children)
I'm just looking towards the day that when I upgrade my storage, my RAM gets a free upgrade too.
(Score: 3, Insightful) by takyon on Thursday May 31 2018, @04:01PM (1 child)
Today is not that day :(
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: -1, Troll) by Anonymous Coward on Thursday May 31 2018, @05:12PM
i signed mdcrawford@gmail.com up for jonas brothers fan club
good luck remembering majordomo commands nerd
(Score: 5, Funny) by DannyB on Thursday May 31 2018, @04:28PM
640 GB.
That ought to be enough for anybody.
(except a Java developer)
If you think a fertilized egg is a child but an immigrant child is not, please don't pretend your concerns are religious
(Score: 2, Interesting) by Anonymous Coward on Thursday May 31 2018, @04:54PM (1 child)
Will these next-gen Xeons come with an untrustable management engine, etc. included? If you think I'm going to give you thousands of dollars for a server I can never secure, you're mistaken. Also, you should know that you are forging dedicated enemies as you continue your subjugation of the human race.
(Score: 2) by DannyB on Thursday May 31 2018, @06:38PM
No it will not. Marketing does not like the branding "untrustable", and so it will be dropped over the objections of the engineers. "trustable" will be substituted instead. So that is what it will come with. You can trust it because it says you can, right on the package.
(Score: 4, Disagree) by jmorris on Thursday May 31 2018, @05:37PM (16 children)
This isn't RAM by any stretch of the marketing hype. It is just an SSD that bypasses the overhead of the PCIe interface and, by using a block size that matches a cache line, avoids the block layers as well. But it still has less than 10K write endurance, so you will be putting real RAM into a system and carefully segmenting off the OS from assigning any of this device's address space to a normal process. The dream of the persistent RAM required for The Machine still eludes HP.
As a storage device it will have fairly impressive performance numbers. The advertised ones make me wonder if the damned things even have time enough to do more than raw error correction and will offload the rest of the wear leveling to the OS. Intel is always looking for ways to slow down a CPU after all.
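A back-of-envelope calculation shows why the endurance figure cited here rules out use as ordinary main memory; the sustained write bandwidth is an illustrative assumption, not a measured number:

```python
# How long a 512 GB module lasts at the <10K write endurance cited above,
# assuming perfect wear leveling and a sustained 2 GB/s write stream
# (the bandwidth figure is an illustrative assumption).
capacity = 512 * 2**30                 # bytes per module
endurance = 10_000                     # lifetime writes per cell (pessimistic)
write_rate = 2 * 2**30                 # bytes/second, assumed
lifetime_s = capacity * endurance / write_rate
print(round(lifetime_s / 86_400, 1))   # days -> 29.6
```

Roughly a month of continuous writes at that rate, where DRAM would shrug off the same traffic indefinitely; hence the need to keep normal process pages off the device.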
(Score: 2) by DannyB on Thursday May 31 2018, @06:40PM (1 child)
If I am reading this right, it is NOT just a current SSD. It is implemented differently. It is much faster than an SSD. Fast enough to be direct memory for the microprocessor.
Can anyone elaborate further?
(Score: 3, Interesting) by takyon on Thursday May 31 2018, @08:19PM
https://www.theregister.co.uk/2018/04/26/chinese_scientists_claim_xpoint_is_phase_change_memory/ [theregister.co.uk]
Unclear, but widely assumed to be phase change memory (PCM).
(Score: -1, Interesting) by fakefuck39 on Thursday May 31 2018, @07:14PM (12 children)
This is a RAM chip with baked-in flash, used as disk - mostly for large enterprise storage arrays. What's new here is that the response time stays completely flat as write load increases. NVMe alone optimizes reads vs. SAS, but does almost nothing for writes because it's just flash at the end. This optimizes for writes and gives the same write IO response time as load increases. If by "OS" you mean the array firmware, then yes - wear leveling and error correction will be, and always have been, handled by the storage array, not the drive. All the drive has is 520 bytes per 512-byte block. Array error correction is applied on top of that. As an example, take a look at EMC's Symmetrix line, which will start using these in Q3.
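For reference, the 520-byte formatted block mentioned here is 512 data bytes plus 8 bytes of per-block protection/metadata (checksums and the like, applied by the array), a roughly 1.6% capacity cost:

```python
# 520-byte formatted block = 512 data bytes + 8 bytes of per-block
# protection/metadata, as used on enterprise arrays.
data_bytes, formatted = 512, 520
pi_bytes = formatted - data_bytes                       # 8 bytes per block
print(pi_bytes, round(pi_bytes / data_bytes * 100, 2))  # 8 1.56
```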
(Score: 2) by jmorris on Thursday May 31 2018, @08:47PM (11 children)
No, Flash + RAM + supercap was the early test piece used to develop the software stack for this tech. This is a new product: it isn't flash based and there is no RAM backing it. If they could make a 512GB stick of DDR4 RAM, it would be a sought-after product in its own right. While faster than any existing SSD, it isn't as fast as real RAM and it has limited write endurance. The official spec sheets speak of less than 10K lifetime writes, but see the other post in this thread saying it might endure as many as 50K. Need a lot more zeros to get into RAM territory though.
No, there will be no array firmware here either. It sits in a DDR4 RAM socket directly connected to the CPU. It is so fast it is doubtful any embedded controller is touching the data, only an ECC controller, and with talk of it only working with new Intel processors that have "support" for it, they could very well be simply doing two bits of ECC and offloading basic error correction to the processor and chipset as well. There are also no 520 or 512 byte blocks; the block size is the same as a cache line, so it can be directly memory mapped into the processor's address space. Basically, software at the OS level is going to have to do all of the heavy lifting here.
(Score: 0) by fakefuck39 on Thursday May 31 2018, @11:11PM (9 children)
I literally did a presentation today on the availability of this on EMC arrays this year and how it works. Either read more than a spec sheet to get your information or get a job where you're taught these things.
(Score: 2) by jmorris on Friday June 01 2018, @12:57AM (8 children)
I have no idea what YOU are playing with, but I'm reading Intel's own words about the product line. They aren't positioning it for storage arrays and other deeply nested SAN/networked stuff that would kill the primary advantage of putting mass storage in a DDR4 slot. Latency is the dragon they are aiming to slay here. They want people to directly memory map these things into a running program's address space and reap the advantage of not having to bother with block layers, filesystems or a SAN. They are supplying example code to use with C, C++, Java and even frickin' Python. The closest to directly networked would be using it for a memcached use case or the example they mention of a simple key/value store with yuge speed and size as the advantages.
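The direct-mapping model described here can be sketched with an ordinary memory-mapped file standing in for a DAX-mapped pmem region (Intel's PMDK handles the real cache-flush and atomicity details); the file name and sizes are illustrative:

```python
import mmap
import os
import tempfile

# A small file stands in for a persistent-memory region.
path = os.path.join(tempfile.mkdtemp(), "pmem.img")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Map it and store through the mapping: plain loads/stores,
# no read()/write() syscalls, no block layer in the hot path.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    mm[0:5] = b"hello"        # an ordinary store into the mapping
    mm.flush()                # analogous to flushing CPU caches to the DIMM
    mm.close()

# The data is durable after the mapping is gone.
with open(path, "rb") as f:
    print(f.read(5))          # b'hello'
```

With real Optane DC Persistent Memory the mapping would bypass the page cache entirely, which is where the latency win over even an NVMe SSD comes from.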
(Score: 2) by jmorris on Friday June 01 2018, @12:58AM
https://software.intel.com/en-us/persistent-memory [intel.com]
Pooched the hyperlink. Preview is a good thing.
(Score: 0) by fakefuck39 on Friday June 01 2018, @01:18AM (2 children)
I'm not playing with anything. The enterprise storage arrays coming out at the end of this year and next year all use SCM. The presentation I did today was specifically on the EMC PowerMax array, which will have this exact module from Intel available in q3.
from your link: "This new memory type, sometimes called storage class memory."
The programming specific to this type of memory is for a storage array operating system, not your host. If you are talking about servers not sitting on a SAN, well, companies don't have those. Yes, you might have some little shop you work for that does its little thing. Those might use storage-class memory for servers. They are also statistically irrelevant - data, memory, and compute are located at large enterprises, and your use case is a rounding error. When we talk about something, we talk about what people use it for, and by people I mean 99% of use cases - enterprise-level corporations, not some dinky-dink little scenario you come up with in your little bit of C code no one knows about.
(Score: 2) by jmorris on Friday June 01 2018, @01:38AM (1 child)
Ok, good to know Intel doesn't have a clue what their products are actually used for. [/sarc]
(Score: 0) by fakefuck39 on Friday June 01 2018, @09:59PM
Oh, they do, and so do their business partners. Unfortunately, watching some video on a website and reading a 5-point bullet summary doesn't mean you have any idea.
I've always wondered why there are adults working PC Support or competing for coding jobs with some Indians who don't know how to use toilet paper. You just explained it to me. Some idiot can spend 5 minutes reading some little blurb on the internet and think himself an expert on something. He then stops digging deeper, because "he's supa smart" and is already an expert. Further information - discard it, that little blurb I read is all I need to know on the subject. This of course produces a resource who thinks he knows everything, actually knows nothing, and is unwilling to learn. And that's how you have 40-year-old people doing the needful job of an intern.
Seriously - please stay the retard you are. It drives my salary up by reducing the competition. In the meanwhile I'll be configuring storage arrays with petabytes of these things at some of the largest global companies. And seriously - look up how a 512-byte block uses 520 bytes on storage arrays. Even the 40-year-old PC Support guy has known this for 20 years. That's just embarrassing.
(Score: 2) by DannyB on Friday June 01 2018, @04:05PM (3 children)
That is my understanding as well.
If I am understanding this correctly, we finally have the holy grail of memory that is fast, cheap and non volatile. Or said differently, block storage that becomes main memory.
I daydreamed about such a possibility at two points earlier in life: in the 1970's, when magnetic bubble memory was being toyed with, and in the mid-1990's, playing with a Palm Pilot. It didn't have any "storage" per se. The device merely went into a very low power mode. Everything was kept in memory all the time. A true reboot was akin to reformatting the drive.
I pondered how profoundly it would change system design if storage was as fast as main memory, but as cheap and plentiful as storage. The idea of slow "storage" and fast volatile main memory is DEEPLY baked into the designs of all existing OSes. Although Linux is perhaps more adaptable here than most.
Am I fundamentally misunderstanding this development?
There was once an earlier time, with magnetic core memory, which was non-volatile. Problems: extremely expensive and labor-intensive to fabricate, extremely limited capacities, and slow. No wonder semiconductor memory took off.
And storage was non volatile, reliable, cheap (relatively speaking) and very slow compared to main memory. Our system designs for decades have been deeply tied to the basic assumption that block storage is slow, massive and cheap, but main memory is fast, volatile and expensive.
If storage IS main memory, will we still have "file systems"? The notion of organizing things into folders and files is pretty deeply baked into our thinking. Even "non-computer people".
(Score: 2) by jmorris on Friday June 01 2018, @05:56PM (1 child)
We still don't have the grail HP is looking for to build The Machine. This stuff is fast, but not as fast as the DDR4 that could be put in the same socket. What it has is size and persistence across power cycles. They don't make 512GB DDR4 sticks yet. But it shares the downside of all the solid-state persistent memory techs currently in use: shockingly low write endurance, which makes it entirely unsuitable as main memory.
So we still have RAM and primary storage; this stuff is just really fast and, most importantly, can be directly mapped into a process. Worrying about NUMA is nothing compared to read/write file access. Even taking care to minimize the writes is bearable for a lot of workloads.
We will still have filesystems, they might look very different in the future. Palm didn't use a traditional filesystem but it was doing most of the same things as one and the line was blurring by the time they abandoned PalmOS for yet another front end atop Linux.
Never got a chance to put my hands on bubble memory, but I did follow it closely. It was never a contender for RAM. It was block oriented, with transfer speeds more in the range of a hard drive. What it had was no seek latency (which, we should remember, was horrid: seek plus dwell could take a hundred or more milliseconds, an eternity for computers) and no moving parts, which attracted aerospace customers and others who needed computing in bumpy environments.
(Score: 2) by DannyB on Friday June 01 2018, @08:47PM
Thanks.
Only closer to that holy grail.
I am left to wonder, if we ever get something that plugs in as "main memory" but is persistent across power loss, how it will radically affect the design of systems.
(Score: 0) by fakefuck39 on Saturday June 02 2018, @05:20AM
You are fundamentally misunderstanding this tech. It's not meant for a server, although you could in theory use it in one. The bulk of these chips will go to serve as sub-millisecond response time storage, which the server sees as regular disk. This is meant for the backend, which itself runs a complex OS, using Xeon chips, over an InfiniBand bus. As an example, the top array right now that will be using these has 16TB of DRAM, backed by 3D XPoint DIMMs. It's likely to be hooked up to something like a Cisco UCS, running a few thousand VMs.
But yes, you can use it internally in some stand-alone server with some custom code. Given how expensive these will be (the array using these is ~$10mil), I doubt anyone not using enterprise-grade storage will though. From your point of view, these will just look like a really fast normal LUN presented to your server over a couple of 32Gb/s SAN connections.
(Score: 0) by Anonymous Coward on Thursday May 31 2018, @11:46PM
This is called storage-class memory (SCM) and is geared for storage arrays, not servers. It is in fact all in mirrored DRAM before it gets to the flash - it also first gets deduped and compressed. Of course, the storage arrays it's made for have like 10TB of RAM for cache and hundreds of Xeon CPUs. We are not talking about a RAID card here. The block size on most arrays is indeed 520 - you are correct in that the host only sees 512 of that. The host is far away from it, though. Before the host sees a virtual disk that's spread over all the disks on the array, we have 32Gb front-end adapters and an NVMe bus, and believe it or not, the DIMMs traverse that NVMe bus along with regular SSDs before the data comes out the front-end ports. There are multiple levels of leveling, caching, and data protection along the way. If you are interested in reading more, there is Pure Storage on the low end and EMC on the high, super-expensive end. Most Fortune 500 companies will have multiple tiers of arrays and mix in all kinds of low- to high-end stuff - but we're still talking racks of storage arrays connected to a SAN, not servers or RAID cards.
(Score: 4, Informative) by takyon on Thursday May 31 2018, @08:18PM
https://thememoryguy.com/examining-3d-xpoints-1000-times-endurance-benefit/ [thememoryguy.com]
They may be conservatively underestimating the write endurance. From the AnandTech article: