posted by janrinok on Wednesday July 29 2015, @05:21PM
from the smaller-yet-bigger dept.

Intel and Micron have announced a new type of non-volatile memory called "3D XPoint", which they say offers 1,000 times lower latency and 1,000 times the endurance of the NAND flash used in solid-state drives, along with 10 times the density of DRAM. It is a stackable 20 nm technology and is expected to go on sale next year in a 128 Gb (16 GB) capacity:

If all goes to plan, the first products to feature 3D XPoint (pronounced cross-point) will go on sale next year. Its price has yet to be announced. Intel is marketing it as the first new class of "mainstream memory" since 1989. Rather than pitch it as a replacement for either flash storage or Ram (random access memory), the company suggests it will be used alongside them to hold certain data "closer" to a processor so that it can be accessed more quickly than before.

[...] 3D XPoint does away with the need to use the transistors at the heart of Nand chips... By contrast, 3D XPoint works by changing the properties of the material that makes up its memory cells to either having a high resistance to electricity to represent a one or a low resistance to represent a zero. The advantage is that each memory cell can be addressed individually, radically speeding things up. An added benefit is that it should last hundreds of times longer than Nand before becoming unreliable.

It is expected to cost more than NAND but less than DRAM, and to be slower than DRAM. If a 16 GB chip is the minimum XPoint offering, it could be used to store an operating system and certain applications for a substantial speedup compared to SSD storage.
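
To make "each memory cell can be addressed individually" concrete, here is a toy Python model of a cross-point array in which a cell at a wordline/bitline intersection stores a bit as a high or low resistance, as in the excerpt above. The resistance values, class, and sizes are invented for illustration and say nothing about Intel and Micron's actual design.

    # Toy model of a resistive cross-point array (illustrative only; the
    # resistance values and interface are assumptions, not the real design).

    HIGH_RES = 1_000_000.0  # ohms; high resistance represents a 1 (per the excerpt)
    LOW_RES = 1_000.0       # ohms; low resistance represents a 0

    class CrossPointArray:
        """Cells sit at wordline/bitline intersections and can be read or
        written individually, unlike block-erased NAND pages."""

        def __init__(self, wordlines, bitlines):
            self.cells = [[LOW_RES] * bitlines for _ in range(wordlines)]

        def write_bit(self, wordline, bitline, bit):
            # Writing changes the cell material's resistance state in place;
            # no erase of neighbouring cells is required first.
            self.cells[wordline][bitline] = HIGH_RES if bit else LOW_RES

        def read_bit(self, wordline, bitline):
            # Reading senses whether the cell's resistance is high or low.
            return 1 if self.cells[wordline][bitline] >= HIGH_RES else 0

    if __name__ == "__main__":
        array = CrossPointArray(wordlines=8, bitlines=8)
        array.write_bit(3, 5, 1)
        print(array.read_bit(3, 5))  # -> 1
        print(128 / 8)               # 128 Gb / 8 bits per byte = 16 GB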

This seems likely to beat similar fast and non-volatile "NAND-killers" to market, such as memristors and Crossbar RRAM. Intel and Micron have worked on phase-change memory (PCM) previously, but Intel has denied that XPoint is a PCM, memristor, or spin-transfer torque based technology. The Platform speculates that the next-generation 100+ petaflops supercomputers will utilize XPoint, along with other applications facing memory bottlenecks such as genomics analysis and gaming. The 16 GB chip is a simple 2-layer stack, compared to 32 layers for Samsung's available V-NAND SSDs, so there is enormous potential for capacity growth.

The technology will be sampling later this year to potential customers. Both Micron and Intel will develop their own 3D XPoint products, and will not be licensing the technology.


Original Submission

 
  • (Score: 3, Interesting) by Aichon (5059) on Wednesday July 29 2015, @09:44PM (#215621)

    Yup, 16GB is overkill for many users. And for others, it can be orders of magnitude too small.

    For instance, when I was doing my grad research in 2008, we had a machine with 16GB of RAM, which was quite a bit less than I wanted, given that I was analyzing a 7.8TB data set (representing a web crawl of 6.3B pages, which was at the time the largest in academia) that had been stored as a single file in an undocumented, proprietary format created by an earlier grad student. Oh, and that format couldn't be randomly accessed, meaning that if I wanted to get at a piece of data that was at the end of the 7.8TB file, I'd first have to read in and parse the preceding 7.8TB in order to know which bit the data I was interested in started at. And there wasn't enough space in the RAID to break the file up and store it in chunks.

    It got really bad when they asked me to figure out for each domain which other domains linked to the ones that linked to them (i.e. for a given domain, find the "supporter" domains that linked to the "neighbor" domains that linked to the given one), since the easiest algorithms for that problem assume that you can store the entire data set in memory. Instead, I think we ended up having to design an algorithm that iterated across the entire 7.8TB data set about 500 times (i.e. 7.8TB / 16GB), with each full read of the data taking roughly 4 hours. All of which was quite doable, of course, but it would have been GREATLY simplified if I could have had quite a bit more RAM.

    These days, the only time I tend to have issues on my 8GB machine at home is when doing photo or video editing for fun, which makes sense, since those files can be massive and the professionals in those fields are known for chewing through RAM like nothing else.
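
    A rough sketch of that kind of multi-pass, out-of-core approach is below; the edge-file format, names, and batching are hypothetical, not the code actually used back then.

        # Hypothetical sketch: find "supporter" domains (two hops of in-links)
        # for a batch of target domains by streaming an edge file that is far
        # too large to hold in RAM.
        from collections import defaultdict

        def read_edges(path):
            """Stream (source_domain, dest_domain) pairs from a huge edge list."""
            with open(path) as fh:
                for line in fh:
                    src, dst = line.split()
                    yield src, dst

        def supporters_for(targets, edge_file):
            # Pass 1 over the whole file: "neighbor" domains linking to each target.
            neighbors = defaultdict(set)
            for src, dst in read_edges(edge_file):
                if dst in targets:
                    neighbors[dst].add(src)

            # Pass 2 over the whole file: "supporter" domains linking to those neighbors.
            wanted = {n for ns in neighbors.values() for n in ns}
            supporters = defaultdict(set)
            for src, dst in read_edges(edge_file):
                if dst in wanted:
                    supporters[dst].add(src)

            # Stitch the two hops back together per target domain.
            return {t: {s for n in neighbors[t] for s in supporters[n]} for t in targets}

        # When the per-batch sets would overflow RAM, the targets get split into
        # batches and the whole file is re-read for each batch, which is how a
        # 7.8 TB data set and 16 GB of RAM turn into hundreds of full passes.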

  • (Score: 1, Insightful) by Anonymous Coward on Wednesday July 29 2015, @10:21PM (#215635)

    A set of memory-mapped files and an index probably would have helped you out a decent amount.
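
    A minimal sketch of that approach, assuming a simple length-prefixed record format (the real, undocumented format may well have differed):

        # Hypothetical sketch: build a byte-offset index in one sequential pass,
        # then use a memory map to jump straight to any record afterwards.
        import mmap
        import pickle
        import struct

        def build_index(data_path, index_path):
            """Record the byte offset of every record, assuming each record
            starts with a 4-byte big-endian length prefix."""
            offsets = []
            with open(data_path, "rb") as fh:
                while True:
                    pos = fh.tell()
                    header = fh.read(4)
                    if len(header) < 4:
                        break
                    (length,) = struct.unpack(">I", header)
                    offsets.append(pos)
                    fh.seek(length, 1)  # skip over the record body
            with open(index_path, "wb") as fh:
                pickle.dump(offsets, fh)

        def read_record(data_path, index_path, n):
            """Fetch record n directly, without parsing anything before it."""
            with open(index_path, "rb") as fh:
                offsets = pickle.load(fh)
            with open(data_path, "rb") as fh:
                with mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ) as mm:
                    pos = offsets[n]
                    (length,) = struct.unpack(">I", mm[pos:pos + 4])
                    return mm[pos + 4:pos + 4 + length]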

    • (Score: 2) by Aichon (5059) on Thursday July 30 2015, @02:52PM (#215898)

      Agreed, but I was young(er) and didn't know any better. ;)