posted by Fnord666 on Wednesday December 19 2018, @10:58PM   Printer-friendly
from the like-pancakes dept.

JEDEC Updates Groundbreaking High Bandwidth Memory (HBM) Standard

JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of an update to the JESD235 High Bandwidth Memory (HBM) DRAM standard.

[...] JEDEC standard JESD235B for HBM leverages Wide I/O and TSV technologies to support densities up to 24 GB per device at speeds up to 307 GB/s. This bandwidth is delivered across a 1024-bit wide device interface that is divided into 8 independent channels on each DRAM stack. The standard can support 2-high, 4-high, 8-high, and 12-high TSV stacks of DRAM at full bandwidth to allow systems flexibility on capacity requirements from 1 GB to 24 GB per stack.
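The 307 GB/s figure follows directly from the interface width and the per-pin rate; a quick sketch (Python, purely illustrative) re-derives it:

```python
def stack_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s (8 bits per byte)."""
    return bus_width_bits * pin_rate_gbps / 8

# JESD235B: 1024-bit interface at 2.4 Gbps per pin
print(stack_bandwidth_gbs(1024, 2.4))  # 307.2, quoted as "up to 307 GB/s"
```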

This update extends the per pin bandwidth to 2.4 Gbps, adds a new footprint option to accommodate the 16 Gb-layer and 12-high configurations for higher density components, and updates the MISR polynomial options for these new configurations.

Some existing High Bandwidth Memory products already had a per pin bandwidth of 2.4 Gbps. However, the increase in stack size and density could allow a product with 96 GB of DRAM using just four stacks (16 Gb DRAM × 12 × 4), up from 32 GB (8 Gb DRAM × 8 × 4).
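That capacity arithmetic (8 Gb = 1 GB per die) checks out; a short sketch, with Python used purely for illustration:

```python
def package_capacity_gb(die_density_gbit: int, dies_per_stack: int, stacks: int) -> float:
    """Total DRAM capacity in GB for a multi-stack HBM package."""
    return die_density_gbit * dies_per_stack * stacks / 8

new = package_capacity_gb(16, 12, 4)  # 16 Gb dies, 12-high, four stacks
old = package_capacity_gb(8, 8, 4)    # 8 Gb dies, 8-high, four stacks
print(new, old)  # 96.0 32.0
```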

This update apparently applies to HBM2 and is not considered a third or fourth generation of HBM.

Also at Wccftech and AnandTech.

Previously: Samsung Increases Production of 8 GB High Bandwidth Memory 2.0 Stacks


Original Submission

Related Stories

Samsung Increases Production of 8 GB High Bandwidth Memory 2.0 Stacks 1 comment

In response to increased demand, Samsung is increasing production of the densest HBM2 DRAM available:

Samsung on Tuesday announced that it is increasing production volumes of its 8 GB, 8-Hi HBM2 DRAM stacks due to growing demand. In the coming months the company's 8 GB HBM2 chips will be used for several applications, including those for consumers, professionals, AI, as well as for parallel computing. Meanwhile, AMD's Radeon Vega graphics cards for professionals and gamers will likely be the largest consumers of HBM2 in terms of volume. And while AMD is traditionally an SK Hynix customer, the timing of this announcement with AMD's launches certainly suggests that AMD is likely a Samsung customer this round as well.

Samsung's 8 GB HBM Gen 2 memory KGSDs (known good stacked die) are based on eight 8-Gb DRAM devices in an 8-Hi stack configuration. The memory components are interconnected using TSVs and feature over 5,000 TSV interconnects each. Every KGSD has a 1024-bit bus and offers up to 2 Gbps data rate per pin, thus providing up to 256 GB/s of memory bandwidth per single 8-Hi stack. The company did not disclose power consumption and heat dissipation of its HBM memory components, but we have reached out [to] Samsung for additional details.
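The per-stack numbers quoted above are internally consistent; a small re-derivation (Python, illustrative only):

```python
# Samsung 8-Hi HBM2 KGSD, figures as quoted in the article
bus_width_bits = 1024
pin_rate_gbps = 2.0
dies_per_stack = 8
die_density_gbit = 8

bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8   # 256.0 GB/s per stack
capacity_gb = dies_per_stack * die_density_gbit / 8  # 8.0 GB per stack
print(bandwidth_gbs, capacity_gb)
```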

Previously:
Samsung Announces Mass Production of HBM2 DRAM
CES 2017: AMD Vega GPUs and FreeSync 2
AMD Launches the Radeon Vega Frontier Edition


Original Submission

Samsung Announces "Flashbolt" HBM2E (High Bandwidth Memory) DRAM packages 7 comments

Samsung HBM2E 'Flashbolt' Memory for GPUs: 16 GB Per Stack, 3.2 Gbps

Samsung has introduced the industry's first memory that corresponds to the HBM2E specification. The company's new Flashbolt memory stacks increase performance by 33% and offer double the per-die as well as per-package capacity. Samsung introduced its HBM2E DRAMs at GTC, indicating that the gaming market is a target market for this memory.

Samsung's Flashbolt KGSDs (known good stacked die) are based on eight 16-Gb memory dies interconnected using TSVs (through silicon vias) in an 8-Hi stack configuration. Every Flashbolt package features a 1024-bit bus with a 3.2 Gbps data transfer speed per pin, thus offering up to 410 GB/s of bandwidth per KGSD.

Samsung positions its Flashbolt KGSDs for next-gen datacenter, HPC, AI/ML, and graphics applications. By using four Flashbolt stacks with a processor featuring a 4096-bit memory interface, developers can get 64 GB of memory with a 1.64 TB/s peak bandwidth, something that will be a great advantage for capacity and bandwidth-hungry chips. With two KGSDs they get 32 GB of DRAM with an 820 GB/s peak bandwidth.
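The Flashbolt figures line up the same way; the sketch below (Python, illustrative) re-derives the per-stack and four-stack numbers:

```python
per_stack_gbs = 1024 * 3.2 / 8      # 409.6, quoted as "up to 410 GB/s"
stack_capacity_gb = 8 * 16 / 8      # eight 16 Gb dies: 16 GB per stack

# Four stacks on a 4096-bit memory interface
total_gb = 4 * stack_capacity_gb    # 64 GB
total_tbs = 4 * per_stack_gbs / 1000  # ~1.64 TB/s peak
print(total_gb, round(total_tbs, 2))
```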

Also at Tom's Hardware.

Previously: Samsung Increases Production of 8 GB High Bandwidth Memory 2.0 Stacks
JEDEC Updates High Bandwidth Memory Standard With New 12-Hi Stacks


Original Submission

SK Hynix Announces HBM2E Memory for 2020 Release 2 comments

[HBM is High Bandwidth Memory. -Ed.]

SK Hynix Announces 3.6 Gbps HBM2E Memory For 2020: 1.8 TB/sec For Next-Gen Accelerators

SK Hynix this morning has thrown their hat into the ring as the second company to announce memory based on the HBM2E standard. While the company isn't using any kind of flashy name for the memory (à la Samsung's Flashbolt), the idea is the same: releasing faster and higher density HBM2 memory for the next generation of high-end processors. Hynix's HBM2E memory will reach up to 3.6 Gbps, which, as things currently stand, will make it the fastest HBM2E memory on the market when it ships in 2020.

As a quick refresher, HBM2E is a small update to the HBM2 standard to improve its performance, serving as a mid-generational kicker of sorts to allow for higher clockspeeds, higher densities (up to 24GB with 12 layers), and the underlying changes that are required to make those happen. Samsung was the first memory vendor to announce HBM2E memory earlier this year, with their 16GB/stack Flashbolt memory, which runs at up to 3.2 Gbps. At the time, Samsung did not announce a release date, and to the best of our knowledge, mass production still hasn't begun.

[...] [SK Hynix's] capacity is doubling, from 8 Gb/layer to 16 Gb/layer, allowing a full 8-Hi stack to reach a total of 16GB. It's worth noting that the revised HBM2 standard actually allows for 12-Hi stacks, for a total of 24GB/stack, however we've yet to see anyone announce memory quite that dense.
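SK Hynix's headline 1.8 TB/s figure assumes four 3.6 Gbps stacks on a 4096-bit interface; a quick check (Python, illustrative only):

```python
per_stack_gbs = 1024 * 3.6 / 8        # 460.8 GB/s per stack at 3.6 Gbps/pin
four_stack_tbs = 4 * per_stack_gbs / 1000  # ~1.84 TB/s, marketed as "1.8 TB/sec"
stack_capacity_gb = 8 * 16 / 8        # 8-Hi stack of 16 Gb dies: 16 GB
print(round(four_stack_tbs, 2), stack_capacity_gb)
```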

See also: HBM2E: The E Stands For Evolutionary

Previously: JEDEC Updates High Bandwidth Memory Standard With New 12-Hi Stacks
Samsung Announces "Flashbolt" HBM2E (High Bandwidth Memory) DRAM packages


Original Submission

Samsung Develops 12-Layer 3D TSV DRAM 5 comments

Samsung has developed the first 12-layer High Bandwidth Memory stacks:

Samsung's 12-layer DRAM KGSDs (known good stack die) will feature 60,000 [through silicon via (TSV)] holes, which is why the manufacturer considers its technology one of the most challenging packaging designs to mass produce. Despite the increase in the number of layers from eight to 12, the thickness of the package will remain at 720 microns, so Samsung's partners will not have to change anything on their side to use the new technology. It does mean that we're seeing DRAM layers getting thinner, with acceptable yields for high-end products.

One of the first products to use Samsung's 12-layer DRAM packaging technology will be the company's 24 GB HBM2 KGSDs that will be mass produced shortly. These devices will allow developers of CPUs, GPUs, and FPGAs to install 48 GB or 96 GB of memory in case of 2048 or 4096-bit buses, respectively. It also allows for 12 GB and 6 GB stacks with less dense configurations.
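The 48 GB and 96 GB figures follow from how many 1024-bit stacks a given memory bus can host; a sketch (Python, illustrative):

```python
def system_capacity_gb(stack_gb: int, bus_width_bits: int) -> int:
    """Total DRAM for a processor whose HBM bus is fully populated
    with 1024-bit stacks of the given capacity."""
    return stack_gb * (bus_width_bits // 1024)

print(system_capacity_gb(24, 2048))  # 48 (two 24 GB stacks)
print(system_capacity_gb(24, 4096))  # 96 (four 24 GB stacks)
```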

"12-Hi" stacks were added to the HBM2 standard back in December, but there were no immediate plans by Samsung or SK Hynix to manufacture it.

Future AMD CPUs (particularly Epyc) may feature HBM stacks somewhere on the CPU die. Intel has already used its embedded multi-die interconnect bridge (EMIB) technology with HBM to create an advanced APU with AMD's own graphics, and is using HBM on field programmable gate arrays (FPGAs) and other products.

AMD's Radeon VII GPU has 16 GB of HBM2. Nvidia's V100 GPU has 16 or 32 GB on a 4,096-bit memory bus.

Also at Electronics Weekly.


Original Submission

High Bandwidth Memory Could Increase to 16 Layers, and More 7 comments

SK Hynix has licensed technology that could enable the production of 16-layer High Bandwidth Memory (HBM) stacks. Bandwidth could also be increased by a superior interconnect density:

SK Hynix has inked a new broad patent and technology licensing agreement with Xperi Corp. Among other things, the company licensed the DBI Ultra 2.5D/3D interconnect technology developed by Invensas. The latter was designed to enable building up to 16-Hi chip assemblies, including next-generation memory, and highly-integrated SoCs that feature numerous homogeneous layers.

Invensas' DBI Ultra is a proprietary die-to-wafer hybrid bonding interconnect technology that supports from 100,000 to 1,000,000 interconnects per mm², using interconnect pitches as small as 1 µm. According to the company, the much greater number of interconnects can offer dramatically increased bandwidth vs. conventional copper pillar interconnect technology, which only goes as high as 625 interconnects per mm². The small interconnects also offer a shorter z-height, making it possible to build a stacked chip with 16 layers in the same space as conventional 8-Hi chips, allowing for greater memory densities.
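The quoted densities are what simple pitch arithmetic predicts for a square grid; a sketch (Python, illustrative; the 40 µm pillar pitch is inferred from the 625/mm² figure, not stated in the article):

```python
def density_per_mm2(pitch_um: float) -> float:
    """Interconnects per square millimetre on a square grid with the given pitch."""
    return (1000 / pitch_um) ** 2

print(density_per_mm2(1))   # 1,000,000 per mm^2 at DBI Ultra's 1 um pitch
print(density_per_mm2(40))  # 625 per mm^2, matching conventional copper pillars
```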

12-Hi stacks have been specified, but have only recently reached development/production.

JEDEC (Joint Electron Device Engineering Council) has updated the HBM2 standard to accommodate 3.2 Gbps/pin speeds. This is in line with Samsung's "Flashbolt" HBM2E memory (although SK Hynix and Samsung may push speeds further, to 3.6 Gbps or 4.2 Gbps/pin), which will enter into mass production soon. JEDEC has not adopted the "HBM2E" nomenclature used by Samsung, SK Hynix, and others.

Micron has announced that it is shipping LPDDR5 DRAM to customers. LPDDR5 will be used in the Xiaomi Mi 10, ZTE Axon 10s Pro 5G, and Samsung Galaxy S20 (yes, even 16 GB of it).


Original Submission

  • (Score: 2) by takyon on Thursday December 20 2018, @03:35AM

    by takyon (881) <{takyon} {at} {soylentnews.org}> on Thursday December 20 2018, @03:35AM (#776653) Journal

    Did I not clickbait enough?

    Anyway, AnandTech came out with their story so let's see what they have to say:

On the capacity front, the new version of the specification, JESD235B, has added support for 12-Hi chip stacks. With 4 more layers than the previous limit of 8-Hi stacks, this will allow memory manufacturers to produce 12 GB stacks at current densities, and 24 GB stacks in the future when 16 Gb layers become available. Though it's worth noting that while 12-Hi stacks are now part of the HBM specification, the group still lists the physical dimensions of a 12-Hi KGSD (known good stacked die) as "TBD", so it's not immediately clear right now whether 12-Hi stacks will follow the same 720μm typical/745μm maximum stack height rules as the current 2/4/8-Hi configurations. Otherwise the configuration of the stacks themselves is unchanged; the new KGSDs will continue to feature up to eight 128-bit channels as well as a 1024-bit physical interface.

    [...] All told, the updated specification means that a fully built-out 4096-bit HBM memory subsystem following the JESD235B spec can now contain 96 GB of memory with a peak bandwidth of 1.228 TB/s.

Mr. Shilov goes on to speculate that JEDEC wouldn't have published this update unless a manufacturer (Samsung, SK Hynix, etc.) was already working on 12-Hi stacks.

    It would be nice to see HBM added to more APUs and SoCs (which pretty much all CPUs are nowadays). Expensive, sure, but it could be worth the investment. And it's great to see progress in vertical technologies like this one. Just how high can they stack it?

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 0) by Anonymous Coward on Thursday December 20 2018, @03:37AM (1 child)

    by Anonymous Coward on Thursday December 20 2018, @03:37AM (#776654)

    I have HBM1. How do I become more envious? Most people are not envious and want to become more envious.

    • (Score: 3, Funny) by takyon on Thursday December 20 2018, @03:47AM

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Thursday December 20 2018, @03:47AM (#776661) Journal

      How do I become more envious?

      1. Go to YouTube.
      2. Search "RTX 2080 Ti". Wait a second, they put GDDR6 on it.
      3. Search Google Duck.com for "HBM Nvidia".
      4. Go back to GooTube and search for "Titan RTX".
      5. Wait for AMD Navi.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]