Intel-Micron's 3D XPoint Memory Lacks Key Details

posted by janrinok on Friday July 31 2015, @12:43PM
from the cough-choke dept.

The 3D design uses a transistor-less cross-point architecture in which memory cells sit at the intersections of word lines and bit lines, allowing each cell to be addressed individually. This means data can be read from or written to the individual cells that hold it, rather than having to go through the whole chip containing them.
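
As a purely illustrative toy model (a software sketch, not the device physics; the array size and function names are invented), this shows the addressing idea: each cell sits at one word-line/bit-line crossing, so a coordinate pair selects exactly one cell.

```c
#include <stdint.h>
#include <stdio.h>

#define WORD_LINES 8
#define BIT_LINES  8

/* Toy model: a cell sits at the crossing of one word line and one bit line,
 * so a (word_line, bit_line) pair addresses exactly one cell. No per-cell
 * transistor is modelled; selection is purely by coordinates. */
static uint8_t cells[WORD_LINES][BIT_LINES];

static void write_cell(int wl, int bl, uint8_t value)
{
    cells[wl][bl] = value;      /* only the addressed cell changes */
}

static uint8_t read_cell(int wl, int bl)
{
    return cells[wl][bl];       /* only the addressed cell is sensed */
}

int main(void)
{
    write_cell(3, 5, 1);        /* set one bit without touching its neighbours */
    printf("cell(3,5) = %u\n", read_cell(3, 5));
    printf("cell(3,6) = %u\n", read_cell(3, 6));
    return 0;
}
```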

Beyond that, though, we don't know much about the memory, like exactly what kind of memory it is. Is it phase-change memory, ReRAM, MRAM or some other kind of memory? The two companies won't say. The biggest unanswered question in my mind is the bus for this new memory, which is supposed to start coming to market next year. The SATA III bus used by virtually all motherboards is already considered saturated. PCI Express is a faster alternative, assuming you have the lanes for the data.

Making memory 1000 times faster isn't very useful if it chokes on the I/O bus, which is exactly what will happen if they use existing technology. It would be like a one-lane highway with no speed limit.
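
Here is a rough back-of-the-envelope sketch of that choke point, using assumed round numbers (the NAND latency, the 1,000x factor applied to it, and the effective SATA III throughput are illustrative guesses, not figures from Intel or Micron):

```c
#include <stdio.h>

int main(void)
{
    /* Assumed round numbers for illustration only. */
    const double nand_read_latency_us = 100.0;   /* rough NAND page-read latency */
    const double xpoint_speedup       = 1000.0;  /* claimed latency improvement  */
    const double sata3_mb_per_s       = 550.0;   /* effective SATA III throughput */
    const double transfer_kb          = 4.0;     /* one 4 KiB block               */

    double xpoint_media_us  = nand_read_latency_us / xpoint_speedup;           /* ~0.1 us */
    double sata_transfer_us = transfer_kb / 1024.0 / sata3_mb_per_s * 1e6;     /* ~7 us   */

    printf("assumed XPoint media latency:            %.2f us\n", xpoint_media_us);
    printf("time just to move 4 KiB over SATA III:   %.2f us\n", sata_transfer_us);
    printf("the bus transfer alone dominates by roughly %.0fx\n",
           sata_transfer_us / xpoint_media_us);
    return 0;
}
```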

It needs a new use model. It can't be positioned as a hard drive alternative because the interfaces will choke it. So the question becomes what do they do? Clearly they need to come up with their own bus. Jim Handy, an analyst who follows the memory space, thinks it will be an SRAM interface. SRAM is used in CPU caches. This would mean the 3D XPoint memory would talk directly to the CPU.

"The beauty of an SRAM interface is that its really, really fast. What's not nice is it has a high pin count," he told me.

He also likes the implementation from Diablo Technologies, which basically built SSDs in the shape of DDR3 memory sticks that plug into your motherboard's memory slots. This lets the drives talk to the CPU at the speed of memory rather than that of a hard drive.

One thing is for sure, the bus will be what makes or breaks 3D XPoint, because what good is a fast read if it chokes on the I/O interface?


Original Submission

Related Stories

Intel and Micron Announce 3D XPoint, A New Type of Memory and Storage 17 comments

Intel and Micron have announced a new type of non-volatile memory called "3D XPoint", which they say is 1,000 times faster (in terms of latency) than the NAND flash used in solid-state disks, with 1,000 times the endurance. It also has 10 times the density of DRAM. It is a stackable 20 nm technology, and is expected to be sold next year in a 128 Gb (16 GB) size:

If all goes to plan, the first products to feature 3D XPoint (pronounced cross-point) will go on sale next year. Its price has yet to be announced. Intel is marketing it as the first new class of "mainstream memory" since 1989. Rather than pitch it as a replacement for either flash storage or Ram (random access memory), the company suggests it will be used alongside them to hold certain data "closer" to a processor so that it can be accessed more quickly than before.

[...] 3D XPoint does away with the need to use the transistors at the heart of Nand chips... By contrast, 3D XPoint works by changing the properties of the material that makes up its memory cells to either having a high resistance to electricity to represent a one or a low resistance to represent a zero. The advantage is that each memory cell can be addressed individually, radically speeding things up. An added benefit is that it should last hundreds of times longer than Nand before becoming unreliable.

It is expected to be more expensive than NAND, cheaper than DRAM, and slower than DRAM. If a 16 GB chip is the minimum XPoint offering, it could be used to store an operating system and certain applications for a substantial speedup compared to SSD storage.

This seems likely to beat similar fast and non-volatile "NAND-killers" to market, such as memristors and Crossbar RRAM. Intel and Micron have worked on phase-change memory (PCM) previously, but Intel has denied that XPoint is a PCM, memristor, or spin-transfer torque based technology. The Platform speculates that the next-generation 100+ petaflops supercomputers will utilize XPoint, along with other applications facing memory bottlenecks such as genomics analysis and gaming. The 16 GB chip is a simple 2-layer stack, compared to 32 layers for Samsung's available V-NAND SSDs, so there is enormous potential for capacity growth.
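
A quick worked example of that capacity-growth point, under the simplifying assumption (ours, not Intel's or Micron's) that per-die capacity scales linearly with layer count at a fixed die area:

```c
#include <stdio.h>

int main(void)
{
    /* Assumption for illustration: capacity scales linearly with layer count. */
    const double gb_per_die_2_layers = 16.0;   /* announced 128 Gb = 16 GB die      */
    const int layers_today  = 2;               /* the announced 2-layer stack       */
    const int layers_future = 32;              /* layer count Samsung ships in V-NAND */

    double gb_future = gb_per_die_2_layers * layers_future / layers_today;
    printf("same die area at %d layers: ~%.0f GB per die\n", layers_future, gb_future);
    return 0;
}
```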

The technology will be sampling later this year to potential customers. Both Micron and Intel will develop their own 3D XPoint products, and will not be licensing the technology.


Original Submission

Rambus and GigaDevice Form Joint Venture to Commercialize Resistive RAM 6 comments

Rambus, GigaDevice form ReRAM joint venture

Reliance Memory has been formed in Beijing, China to commercialize Resistive Random Access Memory (ReRAM) technology. The company is a joint venture between intellectual property developer Rambus Inc. (Sunnyvale, Calif.), fabless chip company GigaDevice Semiconductor (Beijing) Inc. and multiple venture capital companies. VC companies include THG Ventures, West Summit Capital, Walden International and Zhisland Capital.

The value of the investment was not disclosed but the company is expected to make ReRAM for use in embedded and IoT applications. GigaDevice is a fabless chip company that uses foundries to manufacture non-volatile memory and 32bit microcontrollers.

The Rambus ReRAM technology, previously known as CMOx, has a heritage that goes back to Rambus's acquisition of Unity Semiconductor Corp. for $35 million in February 2012. Unity had been working on the technology for a decade but failed to bring it to market. Unity had claimed to have developed a passive rewritable cross-point memory array based on conductive metal oxide, which gives it similarities to filament-based metal-migration technologies such as those developed by Adesto Technologies Corp. and Crossbar Inc.

Resistive random-access memory. Yes, that Rambus.

Related: Crossbar 3D Resistive RAM Heads to Commercialization
Intel-Micron's 3D XPoint Memory Lacks Key Details
IBM Demonstrates Phase Change Memory with Multiple Bits Per Cell
HP/HPE's Memristor: Probably Dead
Western Digital and Samsung at the Flash Memory Summit
Fujitsu to Mass Produce Nantero-Licensed NRAM in 2018


Original Submission

  • (Score: 0) by Anonymous Coward on Friday July 31 2015, @12:56PM (#216279)

    This 'direct access to individual bits' sounds like core memory [wikipedia.org] to me.

  • (Score: 0) by Anonymous Coward on Friday July 31 2015, @01:05PM (#216285)

    It is not clear what they are going to position this memory as.

    Is it a replacement for NAND? Not really, as it is not big enough.

    Is it a replacement for DRAM? Not really; it is not fast enough and it still has a write limit.

    It sits somewhere between the two.

    So it looks like it is meant to be a cache sort of memory for high-r/w SANs, or in existing SSDs, or aimed at the mobile market where one less chip can really help on the BOM cost.

    The sentence "1,000 times faster (in terms of latency) than the NAND flash used in solid-state disks, with 1,000 times the endurance. It also has 10 times the density of DRAM." seems to be deliberately misleading. As it is comparing 2 different things. But making those 2 seem like the same thing. It first talks about the latency on the flash then switches it over to the size vs DRAM. Which if you dig a bit means smaller than NAND and slower than DRAM.

    Don't get me wrong. It is cool stuff. If they can get the density higher, it would be a killer replacement for NAND.

    I still think they announced this to fend off the Chinese firm that is trying to buy Micron or get a better price.

    • (Score: 0) by Anonymous Coward on Friday July 31 2015, @01:10PM (#216289)

      You lost me...much I guess like you were lost by the article's use of compound sentences?

      "As it is comparing 2 different things. But making those 2 seem like the same thing."

      I don't see how that follows. "A dog and a cat ran behind my house earlier today!" doesn't make the dog sound like the same thing as the cat, and is in no way misleading.

      • (Score: 0) by Anonymous Coward on Friday July 31 2015, @01:33PM (#216299)

        Here, let me lay it out for you. It is marketing speak. It is meant to confuse the reader. It is meant to make it seem like it is as fast as RAM and bigger, so it can replace DRAM. When I first read it, that is what went through my head (and I was not the only one).

        It is not the use of a compound sentence. It is the twisting that is happening to make it seem like more than it is. My sentence is just as true: 'it is not as big as current NAND flash and not as fast as DRAM'. But that does not sound as cool.

        It is basically a 'flash' memory that is much faster, with higher densities than DRAM but not as big as what they currently have with NAND.

    • (Score: 2) by kaganar (605) on Friday July 31 2015, @02:27PM (#216318)

      In general, there's a continuum of storage solutions where the faster it is, the less space you can have. Initially NAND didn't disrupt this continuum, but it was still useful as another layer of caching or in small-data instances (e.g. a high-demand 4GB database). If the claims are to be believed here, its initial place will be much the same as NAND's. As its sizes grow, it may supplant NAND, since it would disrupt the common ratio of speed to space just as NAND did.
    • (Score: 2) by takyon (881) on Friday July 31 2015, @03:49PM (#216350) Journal

      It will definitely live in a tier between RAM and NAND [theregister.co.uk].

      I explained every known relationship between XPoint, NAND, and DRAM in my summary.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Friday July 31 2015, @05:44PM (#216413)

      They will probably not just go this simple route.

      A great use would be small dedicated drives as a write cache, e.g. a ZIL (simply due to the longevity vs. NAND flash). And/or a small write cache integrated with conventional SSDs, so power loss is not an issue for the cache and a large number of random writes can be cached and reordered to minimize writes to the more fragile NAND flash in the drive.

    • (Score: 2) by etherscythe (937) on Friday July 31 2015, @06:11PM (#216438) Journal

      How about: it replaces the "Intel hybrid RAID mode", which pairs an SSD cache with a full-size hard drive. All your running data, programs, and core OS files get copied to this new storage device, and then you have an instant-on, instant-off, perfect sleep mode. No more Hibernate restoring from the hard drive. No more power draw when the device isn't being used. No more rebooting unless you have OS patches to apply. Battery life for mobile devices measured in MONTHS. It's ambitious, and as much as I hate some of Intel's business practices, I can't help being impressed by the possibilities here.

      --
      "Fake News: anything reported outside of my own personally chosen echo chamber"
  • (Score: 3, Insightful) by cloud.pt (5516) on Friday July 31 2015, @01:07PM (#216286)

    This is Intel and Micron we are talking about. I don't think they'll have a problem designing a faster bus to take advantage of this memory performance increase. You must take into account two things. For starters, most buses nowadays have been designed with the bottleneck of I/O devices in mind, not the other way around, because usually it's not the bus that bottlenecks; now they can really create something built around the main job of a good bus, raw throughput. And once again, we are talking about Intel and Micron here: besides CPUs and flash drives, their main markets, they make motherboard reference designs, chipset architectures and I/O controllers on both endpoints. They are both arguably in the top 3 companies in ALL those areas by units shipped and performance achieved. Are you really worried they will have trouble using such technology? That's like giving a sniper rifle with 100% accurate automatic target tracking to the best sniper in the world, and saying he won't be able to put it to use because his primary skill is no longer worth a dime. Trust me, nobody is the best at anything without being pretty damn good at a lot of stuff around that particular subject.

  • (Score: 2) by Lagg (105) on Friday July 31 2015, @01:10PM (#216290) Homepage Journal

    We haven't actually had to ask questions like "what about bus speed" for the better part of 10-15 years now. It's a solved problem. This is one of the red flags for me and a few friends. There have been a lot of difficult problems to solve with solid state memory in general but the bus architecture is generally the first one you tackle being the main pipe and all.

    That said, I could still see this being a concrete, no-longer-theoretical project if it weren't for the fact that they want to do this in a top-down fashion. If you knew you were going to have a bottleneck on the bus, you'd try to figure out a better bus or a workaround. But because that wasn't addressed, the way I'm seeing it is that it's an example of the prototype demo scenario. You know, those situations where you have this huge unreasonable project to do on a tight deadline, so your boss tells you to give a tech preview to clients and you just write up the base functionality to maintain the schedule. That's how I'm seeing this. After all, isn't Intel critically off schedule with their technology roadmap? Don't they also have a revolutionary new chip technology to pull out of their ass within a year? Why wouldn't an engineer there just condense the spec in its current state into a press release? At this point I'd be more surprised if there wasn't dishonesty, because it would mean that they have crunch times from the 9th circle of hell.

    --
    http://lagg.me [lagg.me] 🗿
    • (Score: 2) by schad (2398) on Friday July 31 2015, @03:02PM (#216330)

      We haven't actually had to ask questions like "what about bus speed" for the better part of 10-15 years now.

      Not true at all. Perhaps true for the things you do, but there are people out there who do actually worry about this stuff and have to design systems around the limits of the various bus speeds. Ask someone who's got a real RAMdisk, that is, a controller attached to battery-backed RAM that plugs into a disk drive slot, what they think of bus speeds.

      If you knew you were going to have a bottleneck on the bus you'd try to figure out a better bus or a workaround.

      The source I read for this (WSJ) said that Intel/Micron noted that if you go through a normal bus -- I assume, but do not know for sure, that they meant SAS/SATA or possibly even PCIe -- that you only get a 10x speedup. To realize the claimed 1000x speedup, it needs to be wired "directly" to the CPU. I don't know if that means it goes through the NB/SB, if it plugs into a DIMM slot, if there's going to be a special plug on the motherboard, or what. They seem to be positioning this not as "faster SSD" but rather as "persistent RAM."
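
      A crude latency-budget sketch of that 10x-vs-1000x point, using assumed placeholder numbers rather than anything Intel/Micron have published: if the media itself is ~1000x faster than NAND but the controller and protocol add a fixed overhead, that overhead becomes the floor.

      ```c
      #include <stdio.h>

      static double speedup(double nand_us, double media_us, double overhead_us)
      {
          return nand_us / (media_us + overhead_us);
      }

      int main(void)
      {
          /* Assumed placeholder latencies, in microseconds. */
          const double nand_read   = 100.0;  /* NAND page read, media + controller   */
          const double xpoint_read = 0.1;    /* media alone, if "1000x faster" holds */

          printf("behind a SATA/PCIe-style stack (~10 us overhead): ~%.0fx\n",
                 speedup(nand_read, xpoint_read, 10.0));
          printf("attached like DRAM (~0.1 us overhead):            ~%.0fx\n",
                 speedup(nand_read, xpoint_read, 0.1));
          return 0;
      }
      ```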

  • (Score: 1) by pvanhoof (4638) on Friday July 31 2015, @01:29PM (#216297) Homepage

    So .. basically, we need an implementation of mmap that doesn't use the machine's RAM at all but feeds and addresses pages straight from this new kind of storage device to userland.
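
    A minimal sketch of what that could look like from userland, assuming a hypothetical device node /dev/fastmem0 and ordinary POSIX mmap; note that on a normal block device this mapping still goes through the page cache in RAM, so a true "no RAM copy" path would need kernel support to map the media directly.

    ```c
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical device node for the new storage class. */
        int fd = open("/dev/fastmem0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        size_t len = 4096;
        /* MAP_SHARED so stores target the device's pages, not a private copy. */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        strcpy(p, "persisted without a block-I/O round trip");

        /* Push dirty data back; with byte-addressable media this could be
           a cache flush rather than a page writeback. */
        msync(p, len, MS_SYNC);

        munmap(p, len);
        close(fd);
        return 0;
    }
    ```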

  • (Score: 5, Insightful) by TheRaven (270) on Friday July 31 2015, @01:40PM (#216301) Journal

    First, Intel is claiming DRAM, or close-to-DRAM speeds. That means that you're likely to see it in DIMM slots, which have ample bandwidth. All you need is a CPU that has the correct instructions to ensure that cache lines have been flushed to the persistent memory. Guess which major CPU vendor added these instructions to their CPUs a couple of years ago.
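
    For a flavour of what such a flush looks like in code, here is a minimal sketch using the long-standing CLFLUSH instruction via its SSE2 intrinsic; the newer flush/write-back instructions the comment alludes to, and the exact ordering a real persistent-memory platform requires, are outside this sketch, and `buffer` below is ordinary RAM standing in for a persistent mapping.

    ```c
    #include <emmintrin.h>   /* _mm_clflush, _mm_sfence (SSE2) */
    #include <stdint.h>
    #include <string.h>

    #define CACHE_LINE 64

    /* Flush every cache line covering [p, p+len) and fence, so the stores
     * have left the CPU caches before the data is treated as persistent. */
    static void flush_range(const void *p, size_t len)
    {
        uintptr_t start = (uintptr_t)p & ~(uintptr_t)(CACHE_LINE - 1);
        uintptr_t end   = (uintptr_t)p + len;
        for (uintptr_t line = start; line < end; line += CACHE_LINE)
            _mm_clflush((const void *)line);
        _mm_sfence();
    }

    int main(void)
    {
        static char buffer[4096];            /* stand-in for a persistent mapping */
        memcpy(buffer, "flushed to media, not just cached", 34);
        flush_range(buffer, 34);
        return 0;
    }
    ```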

    Second, Flash SSDs can usually saturate the SATA bus when it comes to sequential reads. For random writes, they're an order of magnitude or so better than spinning rust, but they're a long way away from bus speed. If you're doing anything other than running contrived benchmarks, there's a lot of spare speed in SATA for a faster SSD.

    --
    sudo mod me up
    • (Score: 2) by theluggage (1797) on Friday July 31 2015, @02:12PM (#216313)

      Exactly. People seem to have missed that this is INTEL. You may have heard of them - they design and make rather popular processors and the key chipsets for rather popular motherboards. Hmm... how are they going to solve the problem of current CPUs, motherboards & chipsets not supporting XPOINT memory?

      Wouldn't it be terrible for Intel if everybody needed to buy new CPUs and motherboards to get an XPOINT intermediate storage slot? They'd be especially gutted if XPOINT wasn't available on AMD or ARM...

  • (Score: 2) by takyon (881) on Friday July 31 2015, @03:53PM (#216354) Journal

    http://m.theregister.co.uk/2015/07/30/xpoint_cuckoo_invades_memory_storage_hierarchy_nest/ [theregister.co.uk]

    FlashDIMMS are shown in the same region as XPoint. Until we know actual XPoint speeds, we can't be any more precise than that. SAS and SATA SSDs are pushing into the 10,000rpm disk drive space, with 3D TLC (3bits/cell) NAND being at the forefront of this.

    XPoint memory is up to 1,000 times faster than NAND, but it can barely reach 20 per cent of the speeds DRAM is capable of. This may mean that a SATA-connected XPoint SSD will have a large proportion of its access latency taken up by the SATA interface, and also that the current state-of-the-art 12Gbit/s SAS interface restricts its bandwidth.

    It will not cost as much as DRAM but will cost more than NAND. Actual prices don't exist yet and we have not seen relative cost multiples, such as XPoint will be five times more costly than NAND, but half the price of DRAM.

    DIMMs would seem to be a good place for XPoint. Companies like Diablo are putting NAND in DIMMs.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]