Samsung's New HBM2 Memory Thinks for Itself: 1.2 TFLOPS of Embedded Processing Power
Today, Samsung announced that its new HBM2-based memory integrates an AI processor capable of up to 1.2 TFLOPS of embedded computing power, allowing the memory chip itself to perform operations that are usually reserved for CPUs, GPUs, ASICs, or FPGAs.
The new HBM-PIM (processing-in-memory) chips place an AI engine inside each memory bank, offloading processing operations to the HBM itself. This new class of memory is designed to relieve the burden of moving data between memory and processors, which often costs more in power and time than the compute operations themselves.
[...] As with most in-memory processing techniques, we expect this tech to test the memory chips' cooling limits, especially given that HBM chips are typically deployed in stacks that aren't exactly conducive to easy cooling. Samsung's presentation did not cover how HBM-PIM addresses those challenges.
HBM: High Bandwidth Memory.
ASIC: Application-Specific Integrated Circuit.
FPGA: Field-Programmable Gate Array.
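To make the data-movement argument concrete, here is a minimal toy model in Python. This is not Samsung's programming model; the bank count and sizes are made up for illustration. It contrasts a host-side reduction, which hauls every element across the memory bus, with a per-bank reduction, where only one partial sum per bank leaves the memory:

    # Toy model: compare elements moved across the bus for a simple sum.
    # The bank sizes and counts here are illustrative, not HBM-PIM specs.

    banks = [[1.0] * 1024 for _ in range(16)]  # 16 banks, 1024 values each

    # Host-side compute: every element must cross the memory bus first.
    moved_host = sum(len(bank) for bank in banks)   # 16384 transfers
    total_host = sum(x for bank in banks for x in bank)

    # In-memory compute: each bank reduces its own data locally, and only
    # the per-bank partial sums cross the bus.
    partials = [sum(bank) for bank in banks]        # done "inside" each bank
    moved_pim = len(partials)                       # 16 transfers
    total_pim = sum(partials)

    assert total_host == total_pim
    print(moved_host, moved_pim)  # 16384 vs. 16

The arithmetic is identical either way; what changes is how much data has to travel, which is exactly the cost the summary says dominates power consumption and time.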
(Score: 2) by takyon on Thursday February 18 2021, @12:59AM (9 children)
It sounds more mundane and awful than that. This is not some neuromorphic memristor thing, but it does move an AI accelerator closer to memory.
(Score: 3, Touché) by Arik on Thursday February 18 2021, @01:12AM (7 children)
Could you translate that to something meaningful?
If laughter is the best medicine, who are the best doctors?
(Score: 2) by takyon on Thursday February 18 2021, @01:26AM
Machine learning accelerator.
(Score: 1, Funny) by Anonymous Coward on Thursday February 18 2021, @01:35AM
It begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th.
(Score: 1, Informative) by Anonymous Coward on Thursday February 18 2021, @05:07AM (4 children)
The modules are FIMDRAMs with a built-in PCU capable of performing FP16 SIMD operations at 300 MHz with bank-level parallelism.
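For what it's worth, those figures line up with the headline number if you assume plausible (unconfirmed here) values for the SIMD width and unit count, e.g. 16 FP16 lanes per PCU, a fused multiply-add counting as two operations, and 128 PCUs across the stack:

    # Back-of-the-envelope check of the 1.2 TFLOPS headline figure.
    # Only the 300 MHz clock and FP16 SIMD come from the comment above;
    # the lane count and PCU count are assumptions for illustration.
    clock_hz = 300e6      # PCU clock, per the parent comment
    lanes = 16            # assumed FP16 SIMD lanes per PCU
    flops_per_lane = 2    # a fused multiply-add counts as two operations
    pcus = 128            # assumed PCUs across the stack (bank-level parallelism)

    tflops = clock_hz * lanes * flops_per_lane * pcus / 1e12
    print(f"{tflops:.2f} TFLOPS")  # -> 1.23, close to the quoted 1.2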
(Score: 0) by Anonymous Coward on Thursday February 18 2021, @05:32AM (1 child)
Sheesh, no need for all the techie jargon, and acronyms and shite, that just confuses matters.
How about you just say it accelerates the operations that are most commonly used in AI-related computations? Or an "AI accelerator" for short?
(Score: 2, Insightful) by Arik on Thursday February 18 2021, @10:27AM
(Score: 2) by Arik on Thursday February 18 2021, @10:25AM (1 child)
(Score: 0) by Anonymous Coward on Thursday February 18 2021, @10:52AM
You're welcome. Anyone interested can feel free to ask if they need any of the alphabet soup explained. Hopefully after at least attempting to read the Wikipedia articles so I don't have to start all the way at zero.
(Score: 0) by Anonymous Coward on Thursday February 18 2021, @01:35AM
"It sounds more mundane"
It kinda does. Basically a smaller computing node.