posted by martyb on Thursday September 01 2016, @10:49AM
from the good-fast-cheap-—-pick-two dept.

A nanotube-based non-volatile RAM product could give Intel/Micron's 3D XPoint some competition:

Fujitsu announced that it has licensed Nantero's carbon nanotube-based NRAM (Non-volatile RAM) and will participate in a joint development effort to bring a 256Gb 55nm product to market in 2018. Carbon nanotubes are a promising technology projected to make an appearance in numerous applications, largely due to their incredible characteristics, which include unmatchable performance, durability and extreme temperature tolerance. Most view carbon nanotubes as a technology far off on the horizon, but Nantero has had working prototypes for several years.

[...] Other products also suffer limited endurance thresholds, whereas Nantero's NRAM has been tested up to 10^12 (1 trillion) cycles. The company stopped testing endurance at that point, so the upper bounds remain undefined. [...] The NRAM carbon nanotubes are 2nm in diameter. Much like NAND, fabs arrange the material into separate cells. NAND employs electrons to denote the binary value held in each cell (1 or 0), and the smallest lithographies hold roughly a dozen electrons per cell. NRAM employs several hundred carbon nanotubes per cell, and the tubes either attract or repel each other with the application of an electrical current, which signifies an "on" or "off" state. NRAM erases (resets) the cells with a phonon-driven technique that forces the nanotubes to vibrate and separate from each other. NRAM triggers the reset process by reversing the current, and it is reportedly more power efficient than competing memories (particularly at idle, where it requires no power at all).

NRAM could be much faster than 3D XPoint and suitable as universal memory for a concept like HP's "The Machine":

NRAM seems to be far faster than XPoint, and could be denser. An Intel Optane DIMM might have a latency of 7-9 µs (7,000-9,000 ns). Micron QuantX XPoint SSDs are expected to have latencies of 10 µs for reading and 20 µs for writing; that's 10,000 and 20,000 ns respectively. A quick comparison has NRAM at roughly 50 ns or less and XPoint DIMMs at 7,000-10,000 ns, i.e. 140-200 times slower. We might imagine that an XPoint/ReRAM-using server system has both DRAM and XPoint/ReRAM, whereas an NRAM-using system might just use NRAM, once pricing facilitates this.
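
For anyone checking the arithmetic, here is a quick back-of-the-envelope verification of the "140-200 times slower" claim, using the ~50 ns NRAM and 7-10 µs XPoint DIMM figures quoted above (these are the article's estimates, not measurements):

    # Back-of-the-envelope check of the latency ratios quoted above.
    # The 50 ns NRAM and 7,000-10,000 ns XPoint DIMM figures are estimates.
    nram_ns = 50
    for xpoint_ns in (7_000, 10_000):
        print(f"{xpoint_ns:,} ns / {nram_ns} ns = {xpoint_ns // nram_ns}x slower")
    # 7,000 ns -> 140x, 10,000 ns -> 200x, hence "140-200 times slower"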

Another company licensing with Nantero is already looking to scale the NRAM down to 28nm.


Original Submission

 
  • (Score: 3, Informative) by TheRaven on Thursday September 01 2016, @03:32PM

    by TheRaven (270) on Thursday September 01 2016, @03:32PM (#396205) Journal

    One cycle is a rewrite cycle. If it takes 1µs to toggle a bit, then you'd need about 11.6 days of constantly toggling the same bit to wear it out. Even stack memory (which would be totally insane to put in NVRAM) isn't written at the full interface bandwidth, though.
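
    As a rough sanity check of that figure (assuming the 1 µs toggle time above and the 10^12-cycle endurance quoted in the article):

      # Time to wear out a single bit, assuming a 1 us toggle time and the
      # 10^12-cycle endurance figure quoted in the article.
      cycles = 1e12
      toggle_s = 1e-6
      total_s = cycles * toggle_s          # 1e6 seconds
      print(total_s / 86_400, "days")      # ~11.6 days of non-stop toggling one bit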

    At maximum performance, DDR4 (currently the state of the art DRAM interface standard) allows 19.2GB/s transfers. If you have 32GiB of RAM in your server per DDR4 channel (slightly on the low end, but not too far off), and if your writes all average out to a uniform distribution (the best possible case), then it takes 1.79 seconds to toggle every bit once. That works out to about 56,750 years to put a trillion writes on every bit. If your wear levelling achieves even 0.01% of that ideal, your server still lasts more than five years, which is pretty well. 0.01% of ideal wear levelling is pretty easily managed by providing write counters (even something very coarse to flag pages near their limits) to the OS and letting the virtual memory subsystem handle remapping. Even at superpage granularity, you should be able to do that (modern VM subsystems already move data around between physical pages to coalesce pages for superpage promotion). When a particular page gets close to having bits that have been flipped a trillion times, you'll likely just mark it as CoW and prevent further writes.
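
    The same arithmetic, spelled out (all figures are from the paragraph above: 32 GiB per channel, 19.2 GB/s peak bandwidth, 10^12 write cycles per bit):

      # How long a 32 GiB channel lasts when written at the full 19.2 GB/s
      # DDR4 rate with perfectly uniform wear levelling.
      capacity = 32 * 2**30                # 32 GiB in bytes
      bandwidth = 19.2e9                   # bytes per second
      endurance = 1e12                     # write cycles per bit

      one_pass = capacity / bandwidth                  # ~1.79 s to write it all once
      years = one_pass * endurance / (365 * 86_400)    # ~56,750 years at ideal levelling
      print(f"{one_pass:.2f} s per pass, ~{years:,.0f} years to exhaust the endurance")
      # Even wear levelling at 0.01% of this ideal still gives over five years.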

    --
    sudo mod me up
  • (Score: 3, Insightful) by jmorris on Thursday September 01 2016, @07:11PM

    by jmorris (4844) on Thursday September 01 2016, @07:11PM (#396319)

    All of which misses the point. If it has a write limit it isn't RAM. If you have to do wear leveling, and error detection and correction is routine instead of a very rare exception, you can't use it as RAM. You won't feed a modern CPU's cache if you have a complex controller between the chips and the CPU that has to do that sort of work. Basically we have three levels now: RAM has access times in nanoseconds, Flash/SSD is in microseconds, and spinning media is in milliseconds.

    • (Score: 0) by Anonymous Coward on Friday September 02 2016, @02:21AM

      by Anonymous Coward on Friday September 02 2016, @02:21AM (#396518)

      The company stopped testing endurance at that point, so the upper bounds remain undefined

      They don't know the upper limit.

    • (Score: 3, Informative) by TheRaven on Friday September 02 2016, @08:30AM

      by TheRaven (270) on Friday September 02 2016, @08:30AM (#396599) Journal

      If it has a write limit it isn't RAM.

      So DRAM isn't RAM? The capacitors degrade over time while holding charge. They just degrade sufficiently slowly that most of the time you'll throw the RAM away long before it dies. It sounds like NRAM is also in this category for a lot of usage models.

      You won't feed a modern CPU's cache if you have a complex controller between the chips and the CPU that has to do that sort of work

      You already have a complex controller between the chips and the CPU: it's called an MMU. On everything except MIPS these days it has a TLB (actually, typically multiple levels of TLB) and a hardware page-table walker that will fill the TLB (MIPS fills its TLB in software). The only thing required to be able to do wear levelling at page granularity would be a mechanism for the OS to query the RAM. You probably wouldn't do this with write counters; you'd do it by triggering ECC recoverable error reports when the value of a 1 or 0 is sufficiently close to the threshold that it probably isn't going to last much longer. All that the OS then needs to do is move the data to a different physical page and update the page table. That's something that some server operating systems do already for ECC memory (don't trust lines that keep reporting recoverable errors).
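
      The remapping itself is conceptually tiny. A minimal sketch of the idea (the page table, free list and error hook here are hypothetical stand-ins, nothing like a real VM subsystem's structures):

        # Hypothetical sketch: when a physical page reports a recoverable error
        # (i.e. it is nearing its write limit), copy its data to a spare page
        # and repoint the virtual-to-physical mapping. All names are illustrative.
        physical_memory = {0: b"persistent data", 1: b"", 2: b""}  # phys page -> contents
        page_table = {0x7f000: 0}                                  # virt page -> phys page
        free_pages = [1, 2]                                        # spare physical pages

        def on_recoverable_error(virt_page):
            """Migrate a virtual page off a worn physical page onto a fresh one."""
            worn = page_table[virt_page]
            fresh = free_pages.pop()
            physical_memory[fresh] = physical_memory[worn]  # copy the data across
            page_table[virt_page] = fresh   # remap (plus a TLB shootdown on real hardware)
            # The worn page is retired rather than reused; near its limit it could
            # instead be kept for read-mostly (CoW) data.

        on_recoverable_error(0x7f000)
        print(page_table)  # the data now lives on a fresh physical page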

      With a trillion writes before it starts to look unreliable, we're not talking about something that's common, we're talking about having to be a bit more careful with how you handle ECC failures because now the data in your RAM is likely to be important persistent data, not just temporary results that can be discarded on reboot.

      --
      sudo mod me up