Intel and Micron Announce 3D XPoint, A New Type of Memory and Storage

posted by janrinok on Wednesday July 29 2015, @05:21PM
from the smaller-yet-bigger dept.

Intel and Micron have announced a new type of non-volatile memory called "3D XPoint", which they say is 1,000 times faster (in terms of latency) than the NAND flash used in solid-state disks, with 1,000 times the endurance. It also has 10 times the density of DRAM. It is a stackable, 20nm, technology, and is expected to be sold next year in a 128 Gb (16 GB) size:

If all goes to plan, the first products to feature 3D XPoint (pronounced cross-point) will go on sale next year. Its price has yet to be announced. Intel is marketing it as the first new class of "mainstream memory" since 1989. Rather than pitch it as a replacement for either flash storage or RAM (random access memory), the company suggests it will be used alongside them to hold certain data "closer" to a processor so that it can be accessed more quickly than before.

[...] 3D XPoint does away with the need to use the transistors at the heart of NAND chips... By contrast, 3D XPoint works by changing the properties of the material that makes up its memory cells, giving them either a high resistance to electricity to represent a one or a low resistance to represent a zero. The advantage is that each memory cell can be addressed individually, radically speeding things up. An added benefit is that it should last hundreds of times longer than NAND before becoming unreliable.
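The cell behavior described above can be sketched as a toy model. The resistance values, class name, and array dimensions below are illustrative inventions; the actual cell material and physics have not been disclosed:

```python
# Toy model of a cross-point memory array: cells sit at the
# intersection of word lines and bit lines, and each stores a bit
# as high resistance (1) or low resistance (0).
HIGH_R = 1_000_000  # ohms -- arbitrary illustrative values
LOW_R = 1_000

class CrossPointArray:
    def __init__(self, word_lines, bit_lines):
        # every cell starts in the low-resistance (0) state
        self.cells = [[LOW_R] * bit_lines for _ in range(word_lines)]

    def write(self, wl, bl, bit):
        # selecting one word line and one bit line addresses a single
        # cell -- no block erase as with NAND pages
        self.cells[wl][bl] = HIGH_R if bit else LOW_R

    def read(self, wl, bl):
        return 1 if self.cells[wl][bl] == HIGH_R else 0

arr = CrossPointArray(4, 4)
arr.write(2, 3, 1)
print(arr.read(2, 3))  # -> 1
print(arr.read(0, 0))  # -> 0
```

The per-cell addressing is the point of the sketch: writes touch exactly one intersection, whereas NAND must erase an entire block before rewriting any page in it.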

It is expected to be more expensive than NAND, cheaper than DRAM, and slower than DRAM. If a 16 GB chip is the minimum XPoint offering, it could be used to store an operating system and certain applications for a substantial speedup compared to SSD storage.

This seems likely to beat similar fast and non-volatile "NAND-killers" to market, such as memristors and Crossbar RRAM. Intel and Micron have worked on phase-change memory (PCM) previously, but Intel has denied that XPoint is a PCM, memristor, or spin-transfer torque based technology. The Platform speculates that the next-generation 100+ petaflops supercomputers will utilize XPoint, along with other applications facing memory bottlenecks such as genomics analysis and gaming. The 16 GB chip is a simple 2-layer stack, compared to 32 layers for Samsung's available V-NAND SSDs, so there is enormous potential for capacity growth.

The technology will be sampling later this year to potential customers. Both Micron and Intel will develop their own 3D XPoint products, and will not be licensing the technology.


Original Submission

Related Stories

Intel-Micron's 3D XPoint Memory Lacks Key Details 15 comments

The 3D design uses a transistor-less cross-point architecture in which memory cells sit at the intersections of word lines and bit lines, allowing each cell to be addressed individually. This means data can be read from or written to individual cells rather than in larger blocks of the chip.

Beyond that, though, we don't know much about the memory, like exactly what kind of memory it is. Is it phase change memory, ReRAM, MRAM or some other kind of memory? The two won't say. The biggest unanswered question in my mind is the bus for this new memory, which is supposed to start coming to market next year. The SATA III bus used by virtually all motherboards is already considered saturated. PCI Express is a faster alternative assuming you have the lanes for the data.

Making memory 1000 times faster isn't very useful if it chokes on the I/O bus, which is exactly what will happen if they use existing technology. It would be like a one-lane highway with no speed limit.
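The saturation argument can be made concrete with rough numbers. The 20x speedup multiplier below is a deliberately conservative assumption (far under the claimed 1,000x), and the bus figures are approximate practical ceilings, not exact spec values:

```python
# Back-of-envelope check of the "one-lane highway" point: even a
# modest fraction of the claimed speedup saturates SATA III.
SATA3_MBPS = 600       # ~600 MB/s usable ceiling for SATA III (6 Gb/s link)
PCIE3_LANE_MBPS = 985  # ~985 MB/s per PCIe 3.0 lane

nand_read_mbps = 500                     # typical SATA SSD sequential read
xpoint_read_mbps = nand_read_mbps * 20   # assumed 20x -- far below "1,000x"

print(xpoint_read_mbps / SATA3_MBPS)     # ~16.7x over SATA III's ceiling
lanes_needed = -(-xpoint_read_mbps // PCIE3_LANE_MBPS)  # ceiling division
print(lanes_needed)  # -> 11 PCIe 3.0 lanes for even this conservative guess
```

Even at one-fiftieth of the headline figure, the device outruns SATA by an order of magnitude and needs more PCIe 3.0 lanes than a typical consumer M.2 slot (x4) provides, which is why the article argues a new interface is the make-or-break question.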

It needs a new use model. It can't be positioned as a hard drive alternative because the interfaces will choke it. So the question becomes what do they do? Clearly they need to come up with their own bus. Jim Handy, an analyst who follows the memory space, thinks it will be an SRAM interface. SRAM is used in CPU caches. This would mean the 3D XPoint memory would talk directly to the CPU.

"The beauty of an SRAM interface is that it's really, really fast. What's not nice is it has a high pin count," he told me.

He also likes the implementation from Diablo Technologies, which basically built SSD drives in the shape of DDR3 memory sticks that plug into your motherboard memory slots. This lets the drives talk to the CPU at the speed of memory and not a hard drive.

One thing is for sure, the bus will be what makes or breaks 3D XPoint, because what good is a fast read if it chokes on the I/O interface?


Original Submission

Western Digital Acquires SanDisk, MyPassport 256-bit AES Encryption "Useless" 9 comments

Update: Western Digital announced its acquisition of SanDisk on Wednesday for $86.50 per share, or about $19 billion.

Bloomberg reports that hard disk drive maker Western Digital (WD) is considering purchasing SanDisk Corp. for between $80 and $90 a share, or around $17-18 billion.

A merger would give WD access to SanDisk's NAND flash chip foundry deal with Toshiba and make WD an instant competitor in the solid-state drive market. As we reported last week, SanDisk is also partnering with Hewlett-Packard on Storage-Class Memory (SCM), a post-NAND competitor to Intel and Micron's 3D XPoint offering.

After three years of delay, Chinese trade regulator MOFCOM has approved WD's integration with HGST. The two businesses will be required to keep product brands and sales teams separate for two more years, but can begin "combining operations and sharing technology," such as HGST's helium-filled 7-platter hard drives. $400 million in annual operating expenses could be reduced by the integration.

WD can be expected to include helium-filled hard drives in its product lineup imminently. If WD merges with SanDisk, we may also see the inclusion of more large NAND flash caches in the form of hybrid hard drive (HHD/SSHD) products. The Xbox One Elite Bundle ships with a 1 terabyte SSHD, and Seagate recently released a 4 terabyte desktop SSHD.

It's not all good news for Western Digital this week. Security researchers have just disclosed multiple vulnerabilities in WD's "My Passport" and "My Book" self-encrypting hard drives that allow encryption to be bypassed.

Intel 3D XPoint... Still Coming Soon 8 comments

Some of Lenovo's new laptops will ship with Intel's 3D XPoint ("Optane"-branded) SSDs, an alternative to NAND flash and RAM. However, they may not arrive by Q1 2017 and the capacities are still small:

Lenovo's announcement today of a new generation of ThinkPads based on Intel's Kaby Lake platform includes brief but tantalizing mention of Optane, Intel's brand for devices using the 3D XPoint non-volatile memory technology they co-developed with Micron. Lenovo's new ThinkPads and competing high-end Kaby Lake systems will likely be the first appearance of 3D XPoint memory in the consumer PC market.

Several of Lenovo's newly announced ThinkPads will offer 16GB Optane SSDs in the M.2 2242 form factor paired with hard drives as an alternative to using a single NVMe SSD with NAND flash memory (usually TLC NAND, with a portion used as SLC cache). The new Intel Optane devices mentioned by Lenovo are most likely the codenamed Stony Beach NVMe PCIe 3 x2 drives that were featured in a roadmap leaked back in July. More recent leaks have indicated that these will be branded as the Intel Optane Memory 8000p series, with a 32GB capacity in addition to the 16GB Lenovo will be using. Since Intel's 3D XPoint memory is being manufactured as a two-layer 128Gb (16GB) die, these Optane products will require just one or two dies and will have no trouble fitting onto a short M.2 2242 card alongside a controller chip.
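The die arithmetic in the quoted paragraph checks out; a small sketch of the conversion (128 gigabits per die, 8 bits per byte):

```python
# Die-count arithmetic from the quoted capacities: a two-layer
# 128 Gb (gigabit) die is 16 GB (gigabytes), so the announced
# Optane capacities need only one or two dies.
DIE_GBITS = 128
die_gbytes = DIE_GBITS // 8  # 16 GB per die

for capacity_gb in (16, 32):
    dies = capacity_gb // die_gbytes
    print(capacity_gb, "GB ->", dies, "die(s)")
```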

The new generation of ThinkPads will be hitting the market in January and February 2017, but Lenovo and Intel haven't indicated when the configurations with Optane will be available. Other sources in the industry are telling us that Optane is still suffering from delays, so while we hope to see a working demo at CES, the Optane-equipped notebooks may not actually launch until much later in the year. We also expect the bulk of the initial supply of 3D XPoint memory to go to the enterprise market, just like virtually all of Intel and Micron's 3D MLC NAND output has been used for enterprise SSDs so far.

Phoenix666 points out:

When it ships in March, the T570 will be ready to run Intel's Optane, a new class of memory and storage that promises to be significantly faster than today's SSDs and DRAM.

The T570 is the first laptop announced with support for Optane. Intel has not said when it will ship Optane memory, but the T570 has the hooks to support the technology.

Previously: Intel and Micron Announce 3D XPoint, A New Type of Memory and Storage
False News: Intel Announces "Optane"-Brand 3D XPoint SSDs and DIMMs for 2016


Original Submission

Intel Announces "Optane"-Brand 3D XPoint SSDs and DIMMs for 2016 15 comments

Were you concerned that Intel and Micron's new and totally-not-phase-change-memory technology would become vaporware? At the Intel Developer Forum 2015, Intel announced that 3D XPoint based products will be available in 2016 under a new brand name: Optane.

The Optane products will be available in 2016, in both standard SSD (PCIe) form factors for everything from Ultrabooks to servers, and in a DIMM form factor for Xeon systems for even greater bandwidth and lower latencies. As expected, Intel will be providing storage controllers optimized for the 3D XPoint memory, though no further details on that subject matter were provided. This announcement is in-line with Intel and Micron's original 3D XPoint announcement last month, which also announced that 3D XPoint would be out in 2016.

Finally, as part of the Optane announcement, Intel also gave the world's first live 3D XPoint demonstration. In a system with an Optane PCIe SSD, Intel ran a quick set of live IOps benchmarks comparing the Optane SSD to their high-end P3700 SSD. The Optane SSD offered better than 5x the IOps of the P3700 SSD, with that lead growing to more than 7x at a queue depth of 1, a client-like workload where massive arrays of NAND like the P3700 traditionally struggle to achieve maximum performance.


Original Submission

Intel Shows Off a 3D XPoint Backup Device 1 comment

Post-NAND memory/storage technologies that are almost as fast as DRAM (in terms of latency) but denser and cheaper will be arriving in the coming years. One such technology is Intel's 3D XPoint (also branded as "Optane"). Intel has demonstrated the performance of an Optane device at its IDF 2016 keynote in Shenzhen, China:

In the test, Intel used two computer systems. The first system utilized two Intel SATA SSDs to transfer a movie from the host machine to a Thunderbolt 3-connected device using another Intel SATA SSD. The transfer performance clearly shows a TLC-based SATA device--most likely the company's new SSD 540 or 5400 Series business-class drive. The second computer transferred the same movie file over Thunderbolt 3, but this time the host and destination media were based on Optane memory technology.

On the surface, the 2 GB/s transfer was impressive. The performance was consistent and didn't take much time. Upon closer inspection, though, this was the worst possible demonstration of Optane technology Intel could have shown. The company's own SSD 750 Series could have produced similar results.

The demonstration the world is waiting on involves random performance; it's the one area Optane changes storage for consumers. We won't see Optane technology in a data backup device for a decade or more, but a small amount of Optane to cache TLC NAND will go a long way in improving the user experience. If Intel doesn't arrange a public demonstration, we will have to wait until the second half of 2017 when we get our own Optane devices to run the tests ourselves.

At IDF 2015 in San Francisco Intel displayed a static image of Optane reading random data at 76,000 IOPS using queue depth 1. That is a full 7x improvement over the company's current NVMe-based consumer SSDs.

In other news, Everspin has begun to ship samples of 256 Mb Magnetoresistive random-access memory (MRAM). Crossbar tried to remind everyone that it still exists. The last time we saw HP/HPE, it had seemingly abandoned "memristors" to work on the generically-named Storage-Class Memory with SanDisk. Intel could become the first to bring post-NAND memory to the consumer market with XPoint devices with capacities of at least 16-32 GB priced at $2-4/GB, enough to store an operating system and some applications.
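That capacity-and-price guess implies a simple device-cost range, sketched here as plain arithmetic (all figures are the speculative ones from the paragraph above, not announced prices):

```python
# Rough device-cost range implied by the speculative figures:
# 16-32 GB capacities at $2-4 per GB.
costs = [cap * price for cap in (16, 32) for price in (2, 4)]
print(min(costs), "-", max(costs), "dollars")  # -> 32 - 128 dollars
```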


Original Submission

SanDisk and HP Announce Potential Competitor to XPoint Memory 5 comments

HP and SanDisk have announced the development of Storage-Class Memory, a technology with attributes similar to Intel and Micron's 3D XPoint ("crosspoint") memory:

HP and SanDisk are joining forces to combat the Intel/Micron 3D XPoint memory threat, and developing their own Storage-Class Memory (SCM) technology.

SCM is persistent memory that runs at DRAM or near-DRAM speed but is less costly, enabling in-memory computing without any overhead of writing to slower persistent data storage such as flash or disk through a CPU cycle-gobbling IO stack. It requires both hardware and software developments. Micron and Intel's XPoint memory is claimed to be 1,000 times faster than flash with up to 1,000 times flash's endurance. Oddly enough HP and SanDisk say their SCM technology is also "expected to be up to 1,000 times faster than flash storage and offer up to 1,000 times more endurance than flash storage."

[...] The partnership's aim is to create enterprise-class products for Memory-driven Computing and also to build better data centre SSDs. The Storage-Class Memory deal is more long-term: "Our partnership to collaborate on new SCM technology solutions is expected to revolutionise computing in the years ahead."

[...] It's not yet known what the XPoint cell process is, beyond being told it's a bulk change to the material but not a phase-change. Analyst Jim Handy has written an XPoint report which said HP had abandoned its Memristor technology. This SanDisk partnership implies that this point is incorrect.

The HP/SanDisk duo also intend to contribute to HP's Machine concept, "which reinvents the fundamental architecture of computers to enable a quantum leap in performance and efficiency, while lowering costs and improving security."

As we previously reported, Intel and Micron plan to release SSD and DIMM XPoint-based products in 2016, with Intel marketing them under the brand name "Optane".

Is HP's memristor partnership with Hynix obsolete? Will HP Enterprise finally give birth to "The Machine" and change supercomputing? Will Crossbar's ReRAM wither and die, or will the company join the fray and compete to produce the ultimate post-NAND memory?


Original Submission

Micron Abandons 3D XPoint, Puts Fab Up for Sale

Micron Abandons 3D XPoint Memory Technology

In a sudden but perhaps not too surprising announcement, Micron has stated that they are ceasing all R&D of 3D XPoint memory technology. Intel and Micron co-developed 3D XPoint memory, revealed in 2015 as a non-volatile memory technology with higher performance and endurance than NAND flash memory.

Intel has been responsible for almost all of the commercial volume of 3D XPoint-based products, under their Optane brand for both NVMe SSDs and persistent memory modules in the DIMM form factor. Micron in 2016 announced their QuantX brand for 3D XPoint products, but never shipped anything under that brand. Their first and only real product based on 3D XPoint was the X100 high-end enterprise SSD which saw very limited release to close partners. Micron has now decided that further work to commercialize 3D XPoint memory isn't worth the investment.

[...] Micron is now putting that 3D XPoint fab up for sale, and is currently engaged in discussions with several potential buyers. Intel is the most obvious potential buyer, having recently begun the long process of selling their NAND flash and flash-based SSD business to SK hynix while keeping their Optane products. Intel has already moved their 3D XPoint R&D to Rio Rancho, NM but has not built up any 3D XPoint mass production capacity of their own; buying the Lehi, UT fab would save them the trouble of equipping, for example, their NAND fab in Dalian, China to also manufacture 3D XPoint.

Micron exercised its contract right to buy out the Utah fab in 2019, Intel paid Micron to manufacture 3D XPoint memory (likely with a price hike in 2020), and now Intel may be buying back the entire fab.

See also: Micron's 3D XPoint departure is not good news for Intel Optane
3D XPoint Memory At The Crossroads

Also at Tom's Hardware.

Previously: Intel and Micron Announce 3D XPoint, A New Type of Memory and Storage
Micron: 96-Layer 3D NAND Coming, 3D XPoint Sales Disappoint
Micron Buys Out Intel's Stake in 3D XPoint Joint Venture
Micron Follows Through, Buys Out Intel's Stake in NAND and 3D XPoint Joint Venture
Intel and Micron Sign a New 3D XPoint Agreement


Original Submission

TrendForce: 56% of PCs Sold Will Come With SSDs by 2018 16 comments

CNET reports:

They've been a fixture of the computing industry for 60 years, but in 2018, hard drives will be pushed aside by storage systems using memory chips in PCs, an analyst firm predicts. [...] SSDs no longer are exotic. This year, 33 percent of PCs sold will come with SSDs, but that should grow to 56 percent in 2018, analyst firm TrendForce forecast Monday.

They predicted 44% adoption in 2017. SSD prices are expected to drop to $0.17/GB in 2017, a direct result of new generations of 3D/vertical NAND.

As for those 3D XPoint post-NAND devices coming from Intel and Micron, the initial capacities could be closer to 140 GB than the 16-32 GB I originally expected.


Original Submission

  • (Score: 3, Insightful) by VLM on Wednesday July 29 2015, @05:36PM

    by VLM (445) Subscriber Badge on Wednesday July 29 2015, @05:36PM (#215560)

    Wat?

    Intel has denied that XPoint is a ... memristor

    By contrast, 3D XPoint works by changing the properties of the material that makes up its memory cells to either having a high resistance to electricity to represent a one or a low resistance to represent a zero.

    OK, whatever. I'm about to issue a competitor called VLM-emory(tm) that stores ones-n-zeros by changing the electrostatic field between two microscopic parallel plates, but I assure you VLM-emory(tm) isn't a mere dynamic RAM cell, it's totally something new.

    My guess is some patent troll has a patent on "using memristors.... on the internet!" and this is their way of avoiding it.

    Something interesting to think about: if all the specs are true, this will rock for stuff like replacing microcontroller static RAM, unless by "lasts hundreds of times longer than NAND flash" they mean in the real world "like 10 or so write cycles, probably, and just like NiCad cells we'll blame the end user when they break." Eventually it's going to be cheaper to ship billions of bytes of write-mostly-once flash and a register pointer than to ship SRAM and its backup batteries in a microcontroller.

    You want to think of something weird / mind warping, what happens when immutable/functional programming paradigm hits microcontrollers? Wanna stop wasting battery life changing memory values, well, stop changing them!

    • (Score: 1, Informative) by Anonymous Coward on Wednesday July 29 2015, @06:12PM

      by Anonymous Coward on Wednesday July 29 2015, @06:12PM (#215567)

      Having the resistance of something represent 1/0 doesn't necessarily mean it's a memristor.

      The definition of a memristor is a bit narrower than that despite the patent trolls trying to get it to cover more.
      https://en.wikipedia.org/wiki/Memristor [wikipedia.org]

      And even if it is a memristor I think it's ridiculous if someone could patent and monopolize all memristors. That's as ridiculous as allowing people to patent and monopolize all capacitors.

      • (Score: 2) by maxwell demon on Wednesday July 29 2015, @06:25PM

        by maxwell demon (1608) on Wednesday July 29 2015, @06:25PM (#215572) Journal

        I can assure you, had the capacitor been invented these days, there would be a patent covering all of them. Probably broad enough to also cover rechargeable batteries, if those had not been invented before either.

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by Francis on Wednesday July 29 2015, @06:52PM

          by Francis (5544) on Wednesday July 29 2015, @06:52PM (#215578)

          Which is why the funding for the USPTO needs to be increased and provided by the federal government. With application fees being nominal and a large tax on sales of patents.

          • (Score: 1, Interesting) by Anonymous Coward on Wednesday July 29 2015, @08:03PM

            by Anonymous Coward on Wednesday July 29 2015, @08:03PM (#215594)

            Or reduced to zero and the department closed.

            • (Score: 3, Insightful) by Francis on Wednesday July 29 2015, @08:42PM

              by Francis (5544) on Wednesday July 29 2015, @08:42PM (#215601)

              Which worked fine when most inventions required relatively little in the way of resources to produce. These days, the inventions we really need are the ones that require millions of dollars in funding to properly test.

              Take away the USPTO entirely and those innovations won't be coming out of the US at all. I'm all in favor of reform, but burning the building down because the bathroom is dirty doesn't seem terribly practical.

              • (Score: 0) by Anonymous Coward on Thursday July 30 2015, @11:05AM

                by Anonymous Coward on Thursday July 30 2015, @11:05AM (#215829)
                Yeah without all that, Leon Chua wouldn't have thought of the memristor at a major public research university located in Berkeley, California.

                I'm sure the reason people are willing to pay a premium for Apple stuff instead of Korean or Chinese knockoffs is because of all those patents.
                • (Score: 2) by Francis on Thursday July 30 2015, @03:47PM

                  by Francis (5544) on Thursday July 30 2015, @03:47PM (#215911)

                  And where, pray tell, did the funding for his research come from? The university doesn't have huge amounts of money to pay for research, a lot of that money comes from private groups that use royalties to fund the research. No company is going to fund research if the competition can steal the work without consequences.

    • (Score: 2) by captain normal on Wednesday July 29 2015, @06:15PM

      by captain normal (2205) on Wednesday July 29 2015, @06:15PM (#215569)

      Yep, sure looks like dancing around patents to me also. Plus the whole design looks like a crossbar switching scheme to me (https://en.wikipedia.org/wiki/Crossbar_switch). Maybe they're trying to avoid the ancient Bell Labs patents.

      And---"...a 128 Gb (16 GB) size"? Which is it? From the FA it seems they're working with 16 and 32 GB capacities.

      • (Score: 3, Informative) by gman003 on Wednesday July 29 2015, @06:24PM

        by gman003 (4155) on Wednesday July 29 2015, @06:24PM (#215571)

        128 Gb (gigabits) = 16 GB (gigabytes)

    • (Score: 0) by Anonymous Coward on Wednesday July 29 2015, @06:56PM

      by Anonymous Coward on Wednesday July 29 2015, @06:56PM (#215579)

      and this is their way of avoiding it.

      I suspect this is tied to the Chinese company 'we want to buy micron'. Micron is coming back with 'uh double that'.

  • (Score: 2) by Runaway1956 on Wednesday July 29 2015, @07:07PM

    by Runaway1956 (2926) Subscriber Badge on Wednesday July 29 2015, @07:07PM (#215581) Homepage Journal

    my_entire_operating_system lives on a 30 gig ext4 partition. Only 5.91 gig of that partition is used. 16 gig of memory is a hell of a lot of memory, if you're not running a server. You can create a two gig ram drive for temp and cache, load your most memory intensive game, and I'm pretty certain you can STILL run a couple of virtual machines in the background. Probably still have plenty of spare memory and CPU cycles for distributed computing projects, too. Seriously, the only thing that holds me back on 4 gigs of memory, is the ability to load virtual machines with meaningful memory allocations. (I sure as blazes do NOT want a machine or VM resorting to virtual memory!)

    --
    Let's go Brandon!
    • (Score: 0) by Anonymous Coward on Wednesday July 29 2015, @07:24PM

      by Anonymous Coward on Wednesday July 29 2015, @07:24PM (#215586)

      yeah. well I do scientific computing. I need RAM.

    • (Score: 3, Interesting) by Aichon on Wednesday July 29 2015, @09:44PM

      by Aichon (5059) on Wednesday July 29 2015, @09:44PM (#215621)

      Yup, 16GB is overkill for many users. And for others, it can be orders of magnitude too small.

      For instance, when I was doing my grad research in 2008, we had a machine with 16GB of RAM, which was quite a bit less than I wanted, given that I was analyzing a 7.8TB data set (representing a web crawl of 6.3B pages, which was at the time the largest in academia) that had been stored as a single file in an undocumented, proprietary format created by an earlier grad student. Oh, and that format couldn't be randomly accessed, meaning that if I wanted to get at a piece of data that was at the end of the 7.8TB file, I'd first have to read in and parse the preceding 7.8TB in order to know which bit the data I was interested in started at. And there wasn't enough space in the RAID to break the file up and store it in chunks.

      It got really bad when they asked me to figure out for each domain which other domains linked to the ones that linked to them (i.e. for a given domain, find the "supporter" domains that linked to the "neighbor" domains that linked to the given one), since the easiest algorithms for that problem assume that you can store the entire data set in memory. Instead, I think we ended up having to design an algorithm that iterated across the entire 7.8TB data set about 500 times (i.e. 7.8TB / 16GB), with each full read of the data taking roughly 4 hours. All of which was quite doable, of course, but it would have been GREATLY simplified if I could have had quite a bit more RAM.
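The ~500-pass figure falls straight out of the sizes given, and the multi-pass pattern itself can be sketched. The toy join below is a hypothetical stand-in for the commenter's actual algorithm, shrunk to a few records:

```python
# Why ~500 passes: the working set that fits in RAM bounds how much
# of the file can be cross-referenced per sequential read.
DATA_TB = 7.8
RAM_GB = 16
passes = (DATA_TB * 1024) / RAM_GB
print(round(passes))  # -> 499

# Generic multi-pass pattern over a sequential-only file: hold one
# RAM-sized chunk of keys, stream the whole data set looking for
# matches, then repeat with the next chunk.
def multipass_join(records, ram_slots):
    results = []
    keys = sorted({r[0] for r in records})
    for i in range(0, len(keys), ram_slots):
        in_memory = set(keys[i:i + ram_slots])  # one RAM-sized chunk
        for key, value in records:              # one full sequential pass
            if key in in_memory:
                results.append((key, value))
    return results

data = [("a", 1), ("b", 2), ("a", 3), ("c", 4)]
print(multipass_join(data, ram_slots=1))  # three passes, one key each
```

Total work is (data size) x (number of chunks), which is why more RAM helps so dramatically: doubling it halves the number of full 4-hour reads.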

      These days, the only time I tend to have issues on my 8GB machine at home is when doing photo or video editing for fun, which makes sense, since those files can be massive and the professionals in those fields are known for chewing through RAM like nothing else.

      • (Score: 1, Insightful) by Anonymous Coward on Wednesday July 29 2015, @10:21PM

        by Anonymous Coward on Wednesday July 29 2015, @10:21PM (#215635)

        A set of memory-mapped files and an index probably would have helped you out a decent amount.
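A minimal sketch of that suggestion (the newline-delimited file layout here is a hypothetical stand-in; the real data set's format was proprietary): one linear pass builds a byte-offset index, after which a memory-mapped file gives random access without re-parsing everything that precedes a record.

```python
# Build a byte-offset index once, then use mmap for random access
# instead of re-reading the file up to the record of interest.
import mmap
import os
import tempfile

# write a small newline-delimited "data set" to a temp file
path = os.path.join(tempfile.mkdtemp(), "records.txt")
with open(path, "wb") as f:
    f.write(b"alpha\nbeta\ngamma\n")

# one linear pass records where each record starts...
index = []
with open(path, "rb") as f:
    offset = 0
    for line in f:
        index.append(offset)
        offset += len(line)

# ...after which any record is reachable directly by offset
with open(path, "rb") as f, \
        mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    start = index[2]
    end = mm.find(b"\n", start)
    print(mm[start:end].decode())  # -> gamma
```

The index costs one full read up front, but every lookup afterward is O(1) seeks instead of a multi-terabyte scan.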

        • (Score: 2) by Aichon on Thursday July 30 2015, @02:52PM

          by Aichon (5059) on Thursday July 30 2015, @02:52PM (#215898)

          Agreed, but I was young(er) and didn't know any better. ;)