posted by martyb on Monday January 10, @11:02PM   Printer-friendly
from the computers-without-Alzheimers-department dept.

Mass production of revolutionary computer memory moves closer with ULTRARAM™ on silicon wafers for the first time

ULTRARAM™ is a novel type of memory with extraordinary properties. It combines the non-volatility of a data storage memory, like flash, with the speed, energy-efficiency and endurance of a working memory, like DRAM. To do this it utilises the unique properties of compound semiconductors, commonly used in photonic devices such as LEDs, laser diodes and infrared detectors, but not in digital electronics, which is the preserve of silicon.

[...] Now, in a collaboration between the Physics and Engineering Departments at Lancaster University and the Department of Physics at Warwick, ULTRARAM™ has been implemented on silicon wafers for the very first time.

Professor Manus Hayne of the Department of Physics at Lancaster, who leads the work said, "ULTRARAM™ on silicon is a huge advance for our research, overcoming very significant materials challenges of large crystalline lattice mismatch, the change from elemental to compound semiconductor and differences in thermal contraction."

[...] Remarkably, the ULTRARAM™ on silicon devices actually outperform previous incarnations of the technology on GaAs compound semiconductor wafers, demonstrating (extrapolated) data storage times of at least 1000 years, fast switching speed (for device size) and program-erase cycling endurance of at least 10 million, which is one hundred to one thousand times better than flash.

So... are we approaching the point where we get a plug-in RAM storage module that can be used like nonvolatile RAM -- because it is nonvolatile? And when you've built complex data structures on it with RAM efficiency, you can unplug it and put it, and of course the data, on a shelf for later use?

Or just plug it into a computer when you need an extra 24 gigabytes of RAM to formally verify a category-theoretical theorem?

How would *you* like to use this?

Journal Reference:
Peter D. Hodgson, Dominic Lane, Peter J. Carrington, et al. ULTRARAM: A Low‐Energy, High‐Endurance, Compound‐Semiconductor Memory on Silicon [open], Advanced Electronic Materials (DOI: 10.1002/aelm.202101103)


Original Submission

 
  • (Score: 2) by DannyB on Tuesday January 11, @03:15PM (1 child)

    by DannyB (5839) Subscriber Badge on Tuesday January 11, @03:15PM (#1211777) Journal

    I've wondered for a couple decades how OS design might change when something like this eventually becomes real.

    The fact that we have separate volatile working memory and file storage is simply an artifact of the realities of past hardware. Because of that, the whole idea of the files (storage) and memory being separate is very deeply ingrained into our thinking.

    What if the processor could execute out of any page of "storage" (e.g., memory)? There would be no more "boot" process to "load" an OS: the system could simply begin execution at the OS start point and rapidly initialize. That covers a cold start; for a warm start, the system could simply go into a "sleep" mode and then wake up later, all running processes still intact.

    Imagine the architecture of how an executable might run. We typically think of memory as having, from address zero upward, the program code, read-only initialized constants/variables, then read/write variables, then a heap, and, growing down from the other end of the address space, a stack. With a vast persistent address space, one could easily have multiple stacks, multiple heaps, or maybe arrangements we cannot easily think of due to our ingrained historical thinking. There would be no virtual memory; programs could simply malloc storage. There would need to be some way to limit and manage this -- one possibility is GC, which has definitely already come of age. Just rambling about various ideas here.
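    The "programs could simply malloc storage" idea can be approximated today with a file-backed mapping. Below is a minimal sketch: a bump-pointer allocator whose bookkeeping lives inside the mapped region, so allocations survive a process restart. The file name "arena.img" and the function pmalloc() are made-up names for illustration; on real non-volatile RAM the mmap/msync plumbing would be replaced by direct loads and stores.

    ```c
    #include <assert.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define ARENA_SIZE (1 << 20)          /* 1 MiB of "persistent" space */

    typedef struct {
        uint64_t used;                    /* bump pointer, stored IN the arena */
        char     data[];
    } arena_hdr;

    static arena_hdr *arena;

    /* Hypothetical persistent malloc: carve bytes out of the mapped file. */
    static void *pmalloc(size_t n)
    {
        void *p = arena->data + arena->used;
        arena->used += (n + 7) & ~7UL;    /* keep 8-byte alignment */
        return p;
    }

    int main(void)
    {
        int fd = open("arena.img", O_RDWR | O_CREAT, 0644);
        ftruncate(fd, ARENA_SIZE);        /* zero-filled on first creation */
        arena = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
        assert(arena != MAP_FAILED);

        char *msg = pmalloc(32);          /* lives in the file, not the heap */
        strcpy(msg, "survives a restart");
        msync(arena, ARENA_SIZE, MS_SYNC);/* flush: data is now on "storage" */
        printf("allocated at offset %llu: %s\n",
               (unsigned long long)((char *)msg - arena->data), msg);

        munmap(arena, ARENA_SIZE);
        close(fd);
        unlink("arena.img");              /* tidy up for the demo */
        return 0;
    }
    ```

    A real persistent allocator would also need crash-consistent metadata updates (the bump pointer here could be torn mid-write), which is exactly the kind of OS-level problem the comment is speculating about.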

    My point is that OS design might be able to radically change due to a development like this.

    --
    Nature abhors a machine that removes dust from the living space.
  • (Score: 1, Interesting) by Anonymous Coward on Tuesday January 11, @04:09PM

    by Anonymous Coward on Tuesday January 11, @04:09PM (#1211793)

    Just because it flows naturally from the attributes of the hardware doesn't mean it's a bad idea.

    Suppose something crashes and corrupts its in-memory state. With everything persistent, you can't just restart it; you have to reinstall.

    Most of what computers do is ephemeral. Hardware devices are used by one program, then another. No program can assume it will be in the same state it was in before. Network connections expire if not used, or need to be reconnected when the device changes networks. Data gets stale and must be updated.

    You couldn't dual boot operating systems.

    Most of the things you cite as advantages are already available. If you want multiple stacks, that's what threads are. If you want multiple heaps, you can do that too, although the main use for those is generational garbage collectors, which are also a thing. Presenting files as memory is already done by mmap().

    The answer to "it would be nice to not have to use virtual memory" is not actually "non-volatile memory" but rather "more RAM."

    There are really just no advantages to this. Working memory and storage should be separate.