SoylentNews
SoylentNews is people
https://soylentnews.org/

Title    IBM Reduces Neural Network Energy Consumption Using Analog Memory and Non-Von Neumann Architecture
Date    Thursday June 21 2018, @12:46AM
Author    Fnord666
from the approaching-the-singularity dept.
https://soylentnews.org/article.pl?sid=18/06/20/226213

takyon writes:

IBM researchers use analog memory to train deep neural networks faster and more efficiently

Deep neural networks normally require fast, powerful graphics processing unit (GPU) hardware accelerators to deliver the needed speed and computational accuracy — such as the GPU devices used in the just-announced Summit supercomputer. But GPUs are highly energy-intensive, making their use expensive and limiting their future growth, the researchers explain in a recent paper published in Nature.

Instead, the IBM researchers used large arrays of non-volatile analog memory devices (which use continuously variable signals rather than binary 0s and 1s) to perform computations. Those arrays allowed the researchers to create, in hardware, the same scale and precision of AI calculations that are achieved by more energy-intensive systems in software, but running hundreds of times faster and at hundreds of times lower power — without sacrificing the ability to create deep learning systems.
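To make the idea concrete, here is a minimal sketch (not IBM's implementation) of how an analog crossbar array computes a matrix-vector product in place: each weight is stored as a device conductance, the input is applied as row voltages, and each column current sums the products by Ohm's and Kirchhoff's laws. The array size, conductance range, and noise level below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 4x3 crossbar: conductances (in siemens) encode the weights.
    conductances = rng.uniform(1e-6, 1e-5, size=(4, 3))

    # Input activations encoded as row voltages (in volts).
    voltages = rng.uniform(0.0, 0.2, size=4)

    # Column current j = sum_i V_i * G_ij (Ohm's law per device, Kirchhoff's
    # current law per column): the whole matrix-vector product is produced in
    # one parallel analog step, with no weight ever leaving the array.
    column_currents = voltages @ conductances

    # Analog devices are imperfect; model that with a little read noise.
    noisy_currents = column_currents * (1 + rng.normal(0.0, 0.01, size=3))
    print(noisy_currents)

Because the multiply-accumulate happens in the physics of the array itself, the product arrives in one step rather than one multiply at a time, which is where the claimed speed and energy gains come from.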

The trick was to replace the conventional von Neumann architecture, which is "constrained by the time and energy spent moving data back and forth between the memory and the processor (the 'von Neumann bottleneck')," the researchers explain in the paper. "By contrast, in a non-von Neumann scheme, computing is done at the location of the data [in memory], with the strengths of the synaptic connections (the 'weights') stored and adjusted directly in memory."
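As a companion sketch, the in-memory weight adjustment can be written as the standard outer-product update from backpropagation, applied directly to the stored array rather than to a copy fetched from a separate memory. The shapes and learning rate here are illustrative assumptions; on real hardware the update would be applied with paired voltage pulses, not numpy arithmetic.

    import numpy as np

    rng = np.random.default_rng(1)

    # Weights live in the memory array itself ("in memory").
    weights = rng.normal(0.0, 0.1, size=(4, 3))

    x = rng.uniform(0.0, 1.0, size=4)      # upstream activations
    delta = rng.normal(0.0, 0.1, size=3)   # backpropagated error signal
    lr = 0.01                              # illustrative learning rate

    # Standard backprop outer-product update, applied in place: on a crossbar
    # this corresponds to firing paired row/column pulses so every device
    # adjusts its own conductance at once, with no data shuttled to a CPU.
    weights += lr * np.outer(x, delta)
    print(weights)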

Equivalent-accuracy accelerated neural-network training using analogue memory (DOI: 10.1038/s41586-018-0180-5) (DX)


Original Submission

Links

  1. "takyon" - https://soylentnews.org/~takyon/
  2. "IBM researchers use analog memory to train deep neural networks faster and more efficiently" - http://www.kurzweilai.net/ibm-researchers-use-analog-memory-to-train-deep-neural-networks-faster-and-more-efficiently
  3. "Equivalent-accuracy accelerated neural-network training using analogue memory" - https://www.nature.com/articles/s41586-018-0180-5
  4. "DX" - https://dx.doi.org/10.1038/s41586-018-0180-5
  5. "Original Submission" - https://soylentnews.org/submit.pl?op=viewsub&subid=27422
