from the to-surf-the-web dept.
Samsung has announced the mass production of 16 Gb GDDR6 SDRAM chips with a higher-than-expected pin speed. The chips could see use in upcoming graphics cards that are not equipped with High Bandwidth Memory:
Samsung has beaten SK Hynix and Micron to be the first to mass produce GDDR6 memory chips. Samsung's 16Gb (2GB) chips are fabricated on a 10nm process and run at 1.35V. The new chips have a whopping 18Gb/s pin speed and will be able to reach a transfer rate of 72GB/s. Samsung's current 8Gb (1GB) GDDR5 memory chips, besides having half the density, work at 1.55V with up to 9Gb/s pin speeds. In a pre-CES 2018 press release, Samsung briefly mentioned the impending release of these chips. However, the speed on release is significantly faster than the earlier stated 16Gb/s pin speed and 64GB/s transfer rate.
18 Gbps exceeds what the JEDEC standard calls for.
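The 72 GB/s figure follows directly from the pin speed and the chip's interface width; a quick sanity check in Python (assuming the standard 32-bit per-chip bus width used by GDDR5/GDDR6 devices):

```python
def chip_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int = 32) -> float:
    """Per-chip transfer rate in GB/s: pin speed (Gb/s) times bus width,
    divided by 8 bits per byte."""
    return pin_speed_gbps * bus_width_bits / 8

print(chip_bandwidth_gbs(18))  # GDDR6: 18 Gb/s/pin x 32 pins -> 72.0 GB/s
print(chip_bandwidth_gbs(9))   # GDDR5: 9 Gb/s/pin x 32 pins -> 36.0 GB/s
```

The same arithmetic explains why the earlier-announced 16 Gb/s pin speed corresponded to a 64 GB/s transfer rate.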
The new technology is designed to improve bandwidth available to high-performance graphics processing units without fundamentally changing the memory architecture of graphics cards or memory technology itself, similar to other generations of GDDR, although these new specifications are arguably pushing the physical limits of the technology and hardware in its current form. The GDDR5X SGRAM (synchronous graphics random access memory) standard is based on the GDDR5 technology introduced in 2007 and first used in 2008. The GDDR5X standard brings three key improvements to the well-established GDDR5: it increases data rates by up to a factor of two, it improves the energy efficiency of high-end memory, and it defines new capacities of memory chips to enable denser memory configurations of add-in graphics boards or other devices. What is very important for chip developers and graphics card makers is that GDDR5X should not require drastic changes to graphics card designs, and the general feature set of GDDR5 remains unchanged (which is why it is not being called GDDR6).
[...] The key improvement of the GDDR5X standard over its predecessor is its all-new 16n prefetch architecture, which enables up to 512 bits (64 bytes) per array read or write access. By contrast, the GDDR5 technology features an 8n prefetch architecture and can read or write up to 256 bits (32 bytes) of data per cycle. The doubled prefetch and increased data transfer rates are expected to double the effective memory bandwidth of GDDR5X sub-systems. However, actual performance of graphics cards will depend not just on DRAM architecture and frequencies, but also on memory controllers and applications. Therefore, we will need to test actual hardware to find out the real-world benefits of the new memory.
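The per-access figures quoted above are just the prefetch depth multiplied by the chip's 32-bit I/O width; a minimal sketch:

```python
def bits_per_access(prefetch_n: int, io_width_bits: int = 32) -> int:
    """Bits read or written per array access: prefetch depth times I/O width."""
    return prefetch_n * io_width_bits

assert bits_per_access(8) == 256    # GDDR5, 8n prefetch: 256 bits (32 bytes)
assert bits_per_access(16) == 512   # GDDR5X, 16n prefetch: 512 bits (64 bytes)
```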
What purpose does GDDR5X serve when the superior first- and second-generation High Bandwidth Memory (HBM) is available? GDDR5X memory will be cheaper than HBM, and its use is an evolutionary rather than revolutionary change from existing GDDR5-based hardware.
JEDEC has announced that it expects to finalize the DDR5 standard by next year. It says that DDR5 will double bandwidth and density, and increase power efficiency, presumably by lowering the operating voltages again (perhaps to 1.1 V). Availability of DDR5 modules is expected by 2020:
You may have just upgraded your computer to use DDR4 recently or you may still be using DDR3, but in either case, nothing stays new forever. JEDEC, the organization in charge of defining new standards for computer memory, says that it will be demoing the next-generation DDR5 standard in June of this year and finalizing the standard sometime in 2018. DDR5 promises double the memory bandwidth and density of DDR4, and JEDEC says it will also be more power-efficient, though the organization didn't release any specific numbers or targets.
The DDR4 SDRAM specification was finalized in 2012, and DDR3 in 2007, so DDR5's arrival is to be expected (cue the Soylentils still using DDR2). One way to double the memory bandwidth of DDR5 is to double the DRAM prefetch to 16n, matching GDDR5X.
Graphics cards are beginning to ship with GDDR5X. Some graphics cards and Knights Landing Xeon Phi chips include High Bandwidth Memory (HBM). A third generation of HBM will offer increased memory bandwidth, density, and more than 8 dies in a stack. Samsung has also talked about a cheaper version of HBM for consumers with a lower total bandwidth. SPARC64 XIfx chips include Hybrid Memory Cube. GDDR6 SDRAM could raise per-pin bandwidth to 14 Gbps or more, up from the 10-14 Gbps of GDDR5X, while lowering power consumption.
In a surprising move, SK Hynix has announced its first memory chips based on the yet-unpublished GDDR6 standard. The new DRAM devices for video cards have capacity of 8 Gb and run at 16 Gbps per pin data rate, which is significantly higher than both standard GDDR5 and Micron's unique GDDR5X format. SK Hynix plans to produce its GDDR6 ICs in volume by early 2018.
GDDR5 memory has been used in top-of-the-range video cards for over seven years, since summer 2008. Over its active lifespan, GDDR5 data rates have more than doubled, from 3.6 Gbps to 9 Gbps, while per-chip capacities have increased 16-fold, from 512 Mb to 8 Gb. In fact, numerous high-end graphics cards, such as NVIDIA's GeForce GTX 1060 and 1070, still rely on GDDR5, which is not going anywhere even after the 2016 launch of Micron's GDDR5X with up to 12 Gbps data rate per pin. It appears GDDR6 will be used in high-end graphics cards starting in 2018, just two years after the introduction of GDDR5X.
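Both growth claims are easy to verify with back-of-the-envelope arithmetic:

```python
# GDDR5 over its lifespan (2008 to present)
rate_growth = 9 / 3.6                  # data rate: 3.6 Gbps -> 9 Gbps
capacity_growth = (8 * 1024) // 512    # capacity: 512 Mb -> 8 Gb (= 8192 Mb)

assert abs(rate_growth - 2.5) < 1e-9   # "over two times"
assert capacity_growth == 16           # "16 times"
```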
Samsung's second generation ("1y-nm") 8 Gb DDR4 DRAM dies are being mass produced:
Samsung late on Wednesday said that it had initiated mass production of DDR4 memory chips using its second generation '10 nm-class' fabrication process. The new manufacturing technology shrinks the die size of the new DRAM chips and improves their performance as well as energy efficiency. To do that, the process uses new circuit designs featuring air spacers (a first in the DRAM industry). The new DRAM ICs (integrated circuits) can operate at a 3600 Mbit/s per pin data rate (DDR4-3600) at standard DDR4 voltages and have already been validated with major CPU manufacturers.
[...] Samsung's new DDR4 chip produced using the company's 1y nm fabrication process has an 8-gigabit capacity and supports 3600 MT/s data transfer rate at 1.2 V. The new D-die DRAM runs 12.5% faster than its direct predecessor (known as Samsung C-die, rated for 3200 MT/s) and is claimed to be up to 15% more energy efficient as well. In addition, the latest 8Gb DDR4 ICs use a new in-cell data sensing system that offers a more accurate determination of the data stored in each cell and which helps to increase the level of integration (i.e., make cells smaller) and therefore shrink die size.
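The 12.5% figure checks out against the two rated data transfer rates:

```python
c_die, d_die = 3200, 3600        # rated data rates in MT/s
speedup = d_die / c_die - 1      # relative gain of D-die over C-die
print(f"{speedup:.1%}")          # 12.5%
```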
Samsung says that the new 8Gb DDR4 chips feature an "approximate 30% productivity gain" when compared to similar chips made using the 1x nm manufacturing tech.
UPDATE 12/21: Samsung clarified that productivity gain means increase in the number of chips per wafer. Since capacity of Samsung's C-die and D-die is the same, the increase in the number of dies equals the increase in the number of bits per wafer. Therefore, the key takeaway from the announcement is that the 1y nm technology and the new in-cell data sensing system enable Samsung to shrink die size and fit more DRAM dies on a single 300-mm wafer. Meanwhile, the overall 30% productivity gain results in lower per-die costs at the same yield and cycle time (this does not mean that the IC costs are 30% lower though) and increases DRAM bit output.
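The parenthetical is worth making concrete: 30% more dies per wafer translates to roughly 23% lower per-die wafer cost at the same yield, not 30%:

```python
dies_per_wafer_gain = 1.30                 # "approximate 30% productivity gain"
per_die_cost_ratio = 1 / dies_per_wafer_gain
print(f"per-die wafer cost: {1 - per_die_cost_ratio:.0%} lower")  # 23% lower
```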
Also at Tom's Hardware.
Previously: Samsung Announces "10nm-Class" 8 Gb DRAM Chips
Related: Samsung Announces 12Gb LPDDR4 DRAM, Could Enable Smartphones With 6 GB of RAM
Samsung Announces 8 GB DRAM Package for Mobile Devices
Samsung's 10nm Chips in Mass Production, "6nm" on the Roadmap
Samsung Increases Production of 8 GB High Bandwidth Memory 2.0 Stacks
IC Insights Predicts Additional 40% Increase in DRAM Prices
Cadence this week introduced the industry's first IP interface in silicon for the current provisional DDR5 specification developed by JEDEC. Cadence's IP and test chip are fabricated using TSMC's 7 nm process technology and are designed to enable SoC developers to begin work on their DDR5 memory subsystems now and get them to market in 2019-2020, depending on high-volume DDR5 availability. At a special event, Cadence teamed up with Micron to demonstrate their DDR5 DRAM subsystem. In the meantime, Micron has started to sample its preliminary DDR5 chips to interested parties.
Cadence's DDR5 memory controller and PHY achieve a 4400 MT/s data rate with CL42 using Micron's prototype 8 Gb DDR5 memory chips. Compared to DDR4 today, the supply voltage of DDR5 drops from 1.2 V to 1.1 V, with an allowable fluctuation range of only ±0.033 V. In practice, the specifications mean that an 8 Gb DDR5 DRAM chip can hit a considerably higher I/O speed than an 8 Gb commercial DDR4 IC today at a ~8% lower voltage. JEDEC plans for the DDR5 interface to eventually reach 6400 MT/s, but Cadence says that initial DDR5 memory ICs will support ~4400 MT/s data rates. This will be akin to DDR4 rising from DDR4-2133 at initial launch to DDR4-3200 today. Cadence's DDR5 demo video can be watched here.
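The voltage comparison is simple arithmetic, and since dynamic power scales roughly with the square of the supply voltage, the power saving is larger than the voltage drop suggests (a rough estimate that ignores frequency and capacitance changes):

```python
v_ddr4, v_ddr5 = 1.2, 1.1
voltage_drop = 1 - v_ddr5 / v_ddr4     # ~8.3% lower supply voltage
power_ratio = (v_ddr5 / v_ddr4) ** 2   # dynamic power scales roughly with V^2
print(f"{voltage_drop:.1%} lower voltage, "
      f"~{1 - power_ratio:.0%} lower dynamic power")
# 8.3% lower voltage, ~16% lower dynamic power
```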
It would seem that Micron this morning has accidentally spilled the beans on the future of graphics card memory technologies – and outed one of NVIDIA's next-generation RTX video cards in the process. In a technical brief that was posted to their website, dubbed "The Demand for Ultra-Bandwidth Solutions", Micron detailed their portfolio of high-bandwidth memory technologies and the market needs for them. Included in this brief was information on the previously-unannounced GDDR6X memory technology, as well as some information on what seems to be the first card to use it, NVIDIA's GeForce RTX 3090.
[...] At any rate, as this is a market overview rather than a technical deep dive, the details on GDDR6X are slim. The document links to another, still-unpublished document, "Doubling I/O Performance with PAM4: Micron Innovates GDDR6X to Accelerate Graphics Memory", that would presumably contain further details on GDDR6X. Nonetheless, even this high-level overview gives us a basic idea of what Micron has in store for later this year.
The key innovation for GDDR6X appears to be that Micron is moving from using POD135 coding on the memory bus – a binary (two state) coding format – to four state coding in the form of Pulse-Amplitude Modulation 4 (PAM4). In short, Micron would be doubling the number of signal states in the GDDR6X memory bus, allowing it to transmit twice as much data per clock.
[...] According to Micron's brief, they're expecting to get GDDR6X to 21Gbps/pin, at least to start with. This is a far cry from doubling GDDR6's existing 16Gbps/pin rate, but it's also a data rate that is grounded in the limitations of PAM4 and DRAM. PAM4 signaling is easier to drive than binary coding at the same total data rate, since the symbol rate is halved, but having to accurately discriminate four signal states instead of two is conversely a harder task. So a smaller jump isn't too surprising.
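PAM4 carries two bits per symbol, which is what doubles throughput at a given symbol rate. A toy encoder illustrates the idea (the level mapping here is a naive hypothetical one for illustration only, not Micron's actual line coding, which may use a Gray mapping):

```python
# Hypothetical mapping of 2-bit pairs to four normalized signal levels.
PAM4_LEVELS = {0b00: 0.0, 0b01: 1/3, 0b10: 2/3, 0b11: 1.0}

def pam4_encode(data: bytes) -> list[float]:
    """Encode bytes as PAM4 symbols: each byte yields 4 symbols (2 bits each),
    versus 8 symbols for binary (two-state) signaling -- half the symbol rate."""
    symbols = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # most-significant bit pair first
            symbols.append(PAM4_LEVELS[(byte >> shift) & 0b11])
    return symbols

# 0xB4 = 0b10110100 -> bit pairs 10, 11, 01, 00 -> four symbols, not eight bits
assert pam4_encode(b"\xb4") == [2/3, 1.0, 1/3, 0.0]
```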
The leaked Ampere-based RTX 3090 seems to be Nvidia's attempt to compete with AMD's upcoming RDNA2 ("Big Navi") GPUs without lowering the price of the usual high-end "Titan" GPU (Titan RTX launched at $2,499). Here are some of the latest leaks for the RTX 30 "Ampere" GPU lineup.
Related: PCIe 6.0 Announced for 2021: Doubles Bandwidth Yet Again (uses PAM4)