
posted by janrinok on Friday July 31 2015, @12:43PM
from the cough-choke dept.

The design uses a transistor-less cross-point architecture: a three-dimensional lattice of interconnects in which memory cells sit at the intersections of word lines and bit lines, allowing each cell to be addressed individually. This means data can be read from or written to the specific cells that hold it, rather than the whole chip that contains them.
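A minimal sketch of that addressing idea, assuming nothing about the real cell physics or selector design (which Intel and Micron have not disclosed): each cell lives at one (word line, bit line) intersection, so a single coordinate pair selects exactly one cell, with no block-sized transfer.

```python
# Toy model of cross-point addressing; the real selector/cell design is not public.
class CrossPointLayer:
    def __init__(self, word_lines, bit_lines):
        # One storage element at every word-line/bit-line intersection.
        self.cells = [[0] * bit_lines for _ in range(word_lines)]

    def read(self, word_line, bit_line):
        # Drive one word line, sense one bit line: exactly one cell is selected.
        return self.cells[word_line][bit_line]

    def write(self, word_line, bit_line, value):
        # Write in place; no block erase-and-rewrite as with NAND.
        self.cells[word_line][bit_line] = value

layer = CrossPointLayer(word_lines=4096, bit_lines=4096)
layer.write(word_line=17, bit_line=2048, value=1)
print(layer.read(17, 2048))  # -> 1
```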

Beyond that, though, we don't know much about the memory, like exactly what kind of memory it is. Is it phase-change memory, ReRAM, MRAM or some other kind of memory? The two companies won't say. The biggest unanswered question in my mind is the bus for this new memory, which is supposed to start coming to market next year. The SATA III bus used by virtually all motherboards is already considered saturated. PCI Express is a faster alternative, assuming you have the lanes for the data.

Making memory 1000 times faster isn't very useful if it chokes on the I/O bus, which is exactly what will happen if they use existing technology. It would be like a one-lane highway with no speed limit.
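Some back-of-envelope arithmetic makes the bottleneck concrete. The interface figures below are the usual published maxima for each bus, not numbers from the article, and the 10 GiB working set is just a hypothetical payload:

```python
# Hypothetical 10 GiB working set pushed over different interfaces.
GiB = 1024 ** 3
payload = 10 * GiB

buses_bytes_per_sec = {
    "SATA III (~600 MB/s usable)":    600e6,
    "PCIe 3.0 x4 (~3.9 GB/s usable)": 3.9e9,
    "DDR3-1600 channel (~12.8 GB/s)": 12.8e9,
}

for name, rate in buses_bytes_per_sec.items():
    print(f"{name:<32} {payload / rate:6.1f} s to move 10 GiB")
```

With those numbers, SATA III needs roughly 18 seconds for the transfer, PCIe 3.0 x4 about 3 seconds, and a single DDR3 memory channel under a second, which is the gap the article is pointing at.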

It needs a new use model. It can't be positioned as a hard drive alternative because the interfaces will choke it. So the question becomes: what do they do? Clearly they need to come up with their own bus. Jim Handy, an analyst who follows the memory space, thinks it will be an SRAM interface. SRAM is used in CPU caches. This would mean the 3D XPoint memory would talk directly to the CPU.

"The beauty of an SRAM interface is that its really, really fast. What's not nice is it has a high pin count," he told me.

He also likes the implementation from Diablo Technologies, which basically built SSD drives in the shape of DDR3 memory sticks that plug into your motherboard memory slots. This lets the drives talk to the CPU at the speed of memory rather than the speed of a hard drive.
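As an illustration of what "talking to the CPU at the speed of memory" looks like in software terms, here is a sketch of byte-addressable access through a memory mapping, using an ordinary file as a stand-in for a persistent device exposed on the memory bus. This is not Diablo's or Intel's actual interface, just the general access model:

```python
import mmap
import os

path = "/tmp/fake_persistent_device.bin"   # stand-in for a memory-bus device
size = 4096

# Create a small backing file to play the role of the persistent region.
with open(path, "wb") as f:
    f.write(b"\x00" * size)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), size) as region:
        region[128:133] = b"hello"      # plain stores into the mapped bytes
        print(region[128:133])          # -> b'hello', read back with plain loads

os.remove(path)
```

Once the region is mapped, reads and writes are ordinary loads and stores into the address space; there is no per-access trip through the block I/O stack.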

One thing is for sure: the bus will be what makes or breaks 3D XPoint, because what good is a fast read if it chokes on the I/O interface?


Original Submission

 
  • (Score: 0) by Anonymous Coward on Friday July 31 2015, @01:05PM (#216285)

    It is not clear what they are going to position this memory as.

    Is it a replacement for NAND? Not really, as it is not big enough.

    Is it a replacement for DRAM? Not really; it is not fast enough and it still has a write limit.

    It sits somewhere between the two.

    So it looks like it is meant to be a sort of cache memory for high read/write SANs, or inside existing SSDs, or aimed at the mobile market, where one less chip can really help with the BOM cost.

    The sentence "1,000 times faster (in terms of latency) than the NAND flash used in solid-state disks, with 1,000 times the endurance. It also has 10 times the density of DRAM." seems to be deliberately misleading. As it is comparing 2 different things. But making those 2 seem like the same thing. It first talks about the latency on the flash then switches it over to the size vs DRAM. Which if you dig a bit means smaller than NAND and slower than DRAM.

    Don't get me wrong. It is cool stuff. If they can get the density higher, it would be a killer replacement for NAND.

    I still think they announced this to fend off the Chinese firm that is trying to buy Micron, or to get a better price.
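The quoted claims can be unpacked with some ballpark arithmetic. The baseline figures below are rough 2015-era orders of magnitude chosen for illustration, not numbers from the article:

```python
# Rough 2015-era baselines, order-of-magnitude only.
nand_read_latency_us = 100.0   # planar NAND page read
dram_latency_ns      = 15.0    # DRAM access
dram_die_gbit        = 8       # common DRAM die capacity
nand_die_gbit        = 128     # common NAND die capacity

# Apply the two claims to their *different* baselines.
xpoint_latency_ns = (nand_read_latency_us * 1000) / 1000   # "1,000x faster than NAND"
xpoint_die_gbit   = dram_die_gbit * 10                      # "10x the density of DRAM"

print(f"latency: ~{xpoint_latency_ns:.0f} ns vs ~{dram_latency_ns:.0f} ns DRAM "
      f"-> still ~{xpoint_latency_ns / dram_latency_ns:.0f}x slower than DRAM")
print(f"density: ~{xpoint_die_gbit} Gbit/die vs ~{nand_die_gbit} Gbit/die NAND "
      f"-> still less dense than NAND")
```

With those inputs, applying the claims still leaves the new memory slower than DRAM and less dense per die than NAND, which is the commenter's point about the mixed comparison bases.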

  • (Score: 0) by Anonymous Coward on Friday July 31 2015, @01:10PM (#216289)

    You lost me...much I guess like you were lost by the article's use of compound sentences?

    "As it is comparing 2 different things. But making those 2 seem like the same thing."

    I don't see how that follows. "A dog and a cat ran behind my house earlier today!" doesn't make the dog sound like the same thing as the cat, and is in no way misleading.

    • (Score: 0) by Anonymous Coward on Friday July 31 2015, @01:33PM (#216299)

      Here, let me lay it out for you. It is marketing speak. It is meant to confuse the reader. It is meant to make it seem like it is as fast as RAM and bigger, so it can replace DRAM. When I first read it, that is what went through my head (and I was not the only one).

      It is not the use of a compound sentence; it is the twisting that is happening to make it seem like more than it is. My sentence is just as true but does not sound as good: 'it is not as big as current NAND flash and not as fast as DRAM'. But that does not sound as cool.

      It is basically a 'flash' memory that is much faster, with higher densities than DRAM but not as big as what they currently have with NAND.

  • (Score: 2) by kaganar (605) on Friday July 31 2015, @02:27PM (#216318)

    In general, there's a continuum of storage solutions: the faster the storage, the less space you get. Initially NAND didn't disrupt this continuum, but it was still useful as another layer of caching or in small-data instances (e.g., a high-demand 4 GB database). If the claims here are to be believed, 3D XPoint's initial place will be much the same as NAND's was. As its sizes grow, it may supplant NAND, since it would disrupt the usual speed/space ratio just as NAND did.
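A toy illustration of that continuum, with a hypothetical XPoint-like tier slotted between DRAM and NAND as just another layer of caching (names and contents are made up):

```python
def lookup(key, tiers):
    """Walk the hierarchy from fastest/smallest to slowest/largest tier."""
    for name, store in tiers:
        if key in store:
            return name, store[key]
    raise KeyError(key)

dram   = {}                                 # smallest, fastest
xpoint = {"hot_row": b"frequently used"}    # hypothetical middle tier
nand   = {"cold_row": b"rarely used"}       # biggest, slowest

hierarchy = [("DRAM", dram), ("XPoint", xpoint), ("NAND", nand)]
print(lookup("hot_row", hierarchy))    # served from the middle tier
print(lookup("cold_row", hierarchy))   # falls through to NAND
```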
  • (Score: 2) by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday July 31 2015, @03:49PM (#216350) Journal

    It will definitely live in a tier between RAM and NAND [theregister.co.uk].

    I explained every known relationship between XPoint, NAND, and DRAM in my summary.

  • (Score: 0) by Anonymous Coward on Friday July 31 2015, @05:44PM (#216413)

    They will probably not just go this simple route.

    A great use would be small dedicated drives as a write cache, e.g. a ZFS ZIL (simply due to the longevity advantage over NAND flash). And/or a small write cache integrated into conventional SSDs, so power loss is not an issue for the cache and a large number of random writes can be cached and reordered to minimize writes to the more fragile NAND flash in the drive.
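A rough sketch of that write-cache idea, assuming a made-up interface: a small durable cache absorbs and coalesces random writes, then flushes them in LBA order so the NAND behind it sees fewer, more sequential writes:

```python
class DurableWriteCache:
    """Tiny model of a durable front-end cache for an SSD's NAND."""

    def __init__(self, flush_threshold=8):
        self.pending = {}                  # lba -> data; durable, so safe across power loss
        self.flush_threshold = flush_threshold

    def write(self, lba, data, backing):
        self.pending[lba] = data           # repeated writes to one LBA coalesce here
        if len(self.pending) >= self.flush_threshold:
            self.flush(backing)

    def flush(self, backing):
        # Replay in LBA order so the NAND sees a more sequential pattern.
        for lba in sorted(self.pending):
            backing[lba] = self.pending[lba]
        self.pending.clear()

nand = {}
cache = DurableWriteCache()
for lba in (42, 7, 42, 3, 99, 7, 1, 65):   # noisy random write stream
    cache.write(lba, f"block-{lba}".encode(), nand)
cache.flush(nand)
print(sorted(nand))                        # -> [1, 3, 7, 42, 65, 99]
```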

  • (Score: 2) by etherscythe (937) on Friday July 31 2015, @06:11PM (#216438) Journal

    How about: it replaces the "Intel hybrid RAID mode", which pairs an SSD cache with a full-size hard drive. All your running data/programs/core OS files get copied to this new storage device, and then you have an instant-on, instant-off, perfect sleep mode. No more Hibernate restoring from the hard drive. No more power draw when the device isn't being used. No more rebooting unless you have OS patches to apply. Battery life for mobile devices measured in MONTHS. It's ambitious, and as much as I hate some of Intel's business practices, I can't help being impressed by the possibilities here.
