posted by martyb on Thursday September 01 2016, @10:49AM   Printer-friendly
from the good-fast-cheap-—-pick-two dept.

A nanotube-based non-volatile RAM product could give Intel/Micron's 3D XPoint some competition:

Fujitsu announced that it has licensed Nantero's carbon nanotube-based NRAM (Non-volatile RAM) and will participate in a joint development effort to bring a 256Gb 55nm product to market in 2018. Carbon nanotubes are a promising technology projected to make an appearance in numerous applications, largely due to their incredible characteristics, which include unmatchable performance, durability and extreme temperature tolerance. Most view carbon nanotubes as a technology far off on the horizon, but Nantero has had working prototypes for several years.

[...] Other products also suffer limited endurance thresholds, whereas Nantero's NRAM has been tested up to 10^12 (1 trillion) cycles. The company stopped testing endurance at that point, so the upper bounds remain undefined. [...] The NRAM carbon nanotubes are 2nm in diameter. Much like NAND, fabs arrange the material into separate cells. NAND employs electrons to denote the binary value held in each cell (1 or 0), and the smallest lithographies hold roughly a dozen electrons per cell. NRAM employs several hundred carbon nanotubes per cell, and the tubes either attract or repel each other with the application of an electrical current, which signifies an "on" or "off" state. NRAM erases (resets) the cells with a phonon-driven technique that forces the nanotubes to vibrate and separate from each other. NRAM triggers the reset process by reversing the current, and it is reportedly more power efficient than competing memories (particularly at idle, where it requires no power at all).
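The cell mechanics described above can be pictured with a toy model: a bundle of nanotubes that either make contact (low resistance, "on") or sit apart (high resistance, "off"), with the current direction deciding which. Everything here — class names, the tube count, the boolean state — is an illustrative assumption, not a Nantero specification.

```python
# Toy model of an NRAM cell as described above. Purely illustrative;
# no name or number here comes from Nantero.

class NRAMCell:
    def __init__(self, tubes=300):
        self.tubes = tubes        # "several hundred" nanotubes per cell
        self.contact = False      # tubes separated -> "off" (0)

    def write(self, current_forward: bool):
        # Forward current pulls the tubes together (set); reversing the
        # current drives the phonon-style vibration that separates them
        # again (reset).
        self.contact = current_forward

    def read(self) -> int:
        # Non-volatile: the state persists with no power applied,
        # so a read just reports the mechanical configuration.
        return 1 if self.contact else 0

cell = NRAMCell()
cell.write(True)            # set
assert cell.read() == 1
cell.write(False)           # reversed current: reset
assert cell.read() == 0
```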

NRAM could be much faster than 3D XPoint and suitable as universal memory for a concept like HP's "The Machine":

NRAM seems to be far faster than XPoint, and could be denser. An Intel Optane DIMM might have a latency of [7-9 µs] (7,000-9,000ns). Micron QuantX XPoint SSDs are expected to have latencies of [10 µs] for reading and [20 µs] for writing; that's 10,000 and 20,000ns respectively. A quick comparison has NRAM at roughly 50ns or less and XPoint DIMMs at 7,000-10,000ns, making the latter 140-200 times slower. We might imagine that an XPoint/ReRAM-using server system has both DRAM and XPoint/ReRAM, whereas an NRAM-using system might just use NRAM, once pricing facilitates this.
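The ratio quoted above is simple to verify from the figures given (50ns for NRAM against the 7,000-10,000ns XPoint DIMM range):

```python
# Back-of-the-envelope check of the latency ratios quoted above.
nram_ns = 50                        # "roughly 50ns or less" for NRAM
xpoint_dimm_ns = (7_000, 10_000)    # quoted Optane DIMM latency range

ratios = [x / nram_ns for x in xpoint_dimm_ns]
print(ratios)   # [140.0, 200.0] -- matches the "140-200 times slower" claim
```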

Another company licensing with Nantero is already looking to scale the NRAM down to 28nm.


Original Submission

 
  • (Score: 3, Informative) by TheRaven on Friday September 02 2016, @08:30AM

    by TheRaven (270) on Friday September 02 2016, @08:30AM (#396599) Journal

    If it has a write limit it isn't RAM.

    So DRAM isn't RAM? The capacitors degrade over time while holding charge. They just degrade sufficiently slowly that most of the time you'll throw the RAM away long before it dies. It sounds like NRAM is also in this category for a lot of usage models.

    You won't feed a modern CPU's cache if you have a complex controller between the chips and the CPU that has to do that sort of work

    You already have a complex controller between the chips and the CPU, it's called an MMU. On everything except MIPS these days it has a TLB (actually, typically multiple levels of TLB) and a hardware page-table walker that will fill the TLB. The only thing required to be able to do wear levelling at page granularity would be a mechanism for the OS to query the RAM. You probably wouldn't do this with write counters, you'd do it by triggering ECC recoverable error reports when the value of a 1 or 0 is sufficiently close to the threshold that it probably isn't going to last much longer. All that the OS needs to do is then move the data to a different physical page and update the page table. That's something that some server operating systems do already for ECC memory (don't trust lines that keep reporting recoverable errors).
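The page-migration scheme sketched in that paragraph — memory flags a page via a recoverable ECC error, the OS copies the still-correct data to a fresh physical page, updates the page table, and retires the worn page — can be outlined in a few lines. This is a hypothetical sketch; the structures and names (`PageTable`, `MemoryManager`, `on_ecc_recoverable`) are illustrative and not taken from any real kernel.

```python
# Sketch of page-granularity wear handling as described above:
# on a recoverable ECC report, move the data and remap. Illustrative only.

class PageTable:
    def __init__(self):
        self.map = {}                     # virtual page -> physical page

class MemoryManager:
    def __init__(self, physical_pages):
        self.free = set(physical_pages)   # unused physical pages
        self.retired = set()              # worn pages, never reused
        self.data = {}                    # physical page -> contents
        self.pt = PageTable()

    def map_page(self, vpage):
        ppage = self.free.pop()
        self.pt.map[vpage] = ppage
        self.data[ppage] = bytearray(4096)
        return ppage

    def on_ecc_recoverable(self, ppage):
        # The error was recoverable, so the data is still correct:
        # copy it to a fresh page, repoint the mapping, retire the old page.
        vpage = next(v for v, p in self.pt.map.items() if p == ppage)
        new_ppage = self.free.pop()
        self.data[new_ppage] = self.data.pop(ppage)
        self.pt.map[vpage] = new_ppage
        self.retired.add(ppage)

mm = MemoryManager(range(8))
p0 = mm.map_page(vpage=0)
mm.data[p0][:5] = b"hello"
mm.on_ecc_recoverable(p0)               # RAM reports the page is wearing out
assert mm.pt.map[0] != p0                            # remapped elsewhere
assert bytes(mm.data[mm.pt.map[0]][:5]) == b"hello"  # data preserved
assert p0 in mm.retired                              # worn page retired
```

The point of the comment stands out in the sketch: the only "controller" logic needed is a notification path plus an ordinary copy-and-remap, which is work the MMU and OS already know how to do.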

    With a trillion writes before it starts to look unreliable, we're not talking about something that's common, we're talking about having to be a bit more careful with how you handle ECC failures because now the data in your RAM is likely to be important persistent data, not just temporary results that can be discarded on reboot.
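To put the trillion-cycle figure in perspective, here is a rough worst-case calculation under an assumed, illustrative access pattern: a single cell rewritten back-to-back at a DRAM-like 50ns per write, with no wear leveling at all.

```python
# Rough lifetime arithmetic for the 10^12-cycle endurance figure above.
# The 50ns write interval is an assumed worst case, not a spec.
cycles = 10**12
write_interval_s = 50e-9                 # one write every 50 ns, same cell

seconds = cycles * write_interval_s      # 50,000 s
print(seconds / 3600)                    # ~13.9 hours of non-stop writes
```

Hammering one address continuously is a pathological pattern; spread those writes across even a modest number of pages by the ECC-driven remapping described above and the lifetime multiplies accordingly, which is why such failures would be rare in practice.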

    --
    sudo mod me up