Samsung's New HBM2 Memory Thinks for Itself: 1.2 TFLOPS of Embedded Processing Power
Today, Samsung announced that its new HBM2-based memory includes an integrated AI processor delivering up to 1.2 TFLOPS of embedded computing power, allowing the memory chip itself to perform operations that are usually reserved for CPUs, GPUs, ASICs, or FPGAs.
The new HBM-PIM (processing-in-memory) chips embed an AI engine in each memory bank, offloading processing operations to the HBM itself. The new class of memory is designed to alleviate the burden of moving data between memory and processors, which often costs more in power and time than the compute operations themselves.
[...] As with most in-memory processing techniques, we expect this tech will press the boundaries of the memory chips' cooling limitations, especially given that HBM chips are typically deployed in stacks that aren't exactly conducive to easy cooling. Samsung's presentation did not cover how HBM-PIM addresses those challenges.
HBM: High Bandwidth Memory.
ASIC: Application-Specific Integrated Circuit.
FPGA: Field-Programmable Gate Array.
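To illustrate the data-movement argument above: in a conventional system, every operand crosses the memory bus twice (read into the processor, result written back), while a PIM design executes simple operations where the data lives, so only the command crosses the bus. The C sketch below is illustrative only; the pim_exec call and PIM_OP_MUL_SCALAR opcode are hypothetical, as Samsung has not published a host-side API for HBM-PIM.

/* Sketch of the data-movement problem PIM targets.
 * The pim_* names are hypothetical, not a real HBM-PIM API. */

#include <stdio.h>
#include <stdlib.h>

#define N (1u << 20)

/* Conventional path: every element crosses the memory bus twice
 * (read into the CPU, result written back), roughly
 * 2 * n * sizeof(float) bytes of traffic. */
static void scale_on_cpu(float *v, float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        v[i] *= k;
}

/* Hypothetical PIM path: the host issues one command and the
 * per-bank engines apply the multiply where the data lives,
 * so only the command itself crosses the bus. */
static void scale_in_pim(float *v, float k, size_t n)
{
    /* pim_exec(PIM_OP_MUL_SCALAR, v, k, n);  -- illustrative only */
    (void)v; (void)k; (void)n;
}

int main(void)
{
    float *v = malloc(N * sizeof *v);
    if (!v)
        return 1;
    for (size_t i = 0; i < N; i++)
        v[i] = 1.0f;

    scale_on_cpu(v, 2.0f, N);   /* data moves to the CPU and back */
    scale_in_pim(v, 2.0f, N);   /* stub: would run inside the banks */

    printf("v[0] = %.1f\n", v[0]);
    free(v);
    return 0;
}

The contrast is the whole point: the arithmetic is identical in both paths, but the conventional version pays for moving megabytes over the bus, which is where most of the power and latency goes.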
(Score: 2) by legont on Thursday February 18 2021, @02:57AM
Now the memory itself can decide whether or not it wants to share its, well, memory with the requester. I imagine the AI of a particular memory bit checking the user's biometrics before responding to the request.
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.