
posted on Wednesday April 08 2015, @05:53AM
from the thinking-about-thinking dept.

The grant will fund research at Rensselaer Polytechnic Institute into the potential of neuromorphic computing in next-generation supercomputers. The researchers will use their own AMOS supercomputer to simulate various designs for hybrid supercomputers that incorporate both classical and neuromorphic processors.

HPCwire separately published this analysis of the project which credits IBM's TrueNorth chip for sparking significant interest in the field of neuromorphic computing. Unveiled last year, TrueNorth currently integrates 5.4 billion transistors and 4,096 cores into a 28nm-process chip with a power consumption of just 70 mW, and is capable of simulating "one million individually programmable neurons". Hybrid supercomputing could mirror the recent trend towards mixed computing systems, in which CPUs are paired with general-purpose GPUs and coprocessors such as Intel's Xeon Phi.

From the HPCwire announcement article:

"The question we're asking is: What if future supercomputer designs were to have several embedded neuromorphic processors?" said Christopher Carothers, director of the Center for Computational Innovations, in the official announcement. "How would you design that computer? And what new capabilities would it offer?"

Neuromorphic computing is built on a computational model patterned on the human brain, incorporating an interconnected network of nodes or “neurons” that make it possible to encode information far more efficiently than classic computer chips. Computers that incorporate a neuromorphic approach excel at pattern recognition, with far less energy usage (and heat) than conventional chips, and have the potential to overcome looming barriers to increased computing speed.

Although computer scientists have used algorithms to approximate neuromorphic computing (an approach commonly called a “neural net”), IBM only recently built this first neuromorphic chip as part of a DARPA-funded research effort. The Rensselaer researchers will base their work on the specifications of IBM’s “TrueNorth” neuromorphic processor and simulation development kit.
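Neuromorphic designs like the one described above are usually discussed in terms of spiking neurons rather than clocked logic. As a rough sketch only (the parameter values below are invented for illustration and are not TrueNorth's actual specifications), a single leaky integrate-and-fire neuron can be simulated in a few lines:

```python
# Minimal leaky integrate-and-fire neuron sketch. Illustrative only:
# the leak, threshold, and input values are made-up, not taken from
# any real neuromorphic chip.
def simulate_lif(input_current, leak=0.9, threshold=1.0, steps=50):
    """Return the time steps at which the neuron spikes."""
    v = 0.0          # membrane potential
    spikes = []
    for t in range(steps):
        v = leak * v + input_current   # integrate input, leak old charge
        if v >= threshold:             # fire when threshold is crossed...
            spikes.append(t)
            v = 0.0                    # ...then reset
    return spikes

print(simulate_lif(0.3))   # spikes at regular intervals: [3, 7, 11, ...]
```

The key property this sketch captures is that information is encoded in the timing of discrete spikes rather than in continuously clocked values, which is one reason such designs can be so power-efficient.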

  • (Score: 4, Interesting) by Immerman (3985) on Wednesday April 08 2015, @02:59PM (#167864)

    > pipelines composed of smaller pipelines

    Is it though? Unless you're considering a pipeline to be the internal processes of a single neuron, that seems like a gross oversimplification. As I understand it, a computing pipeline is typically sequential and self-contained, with minimal interaction with other pipelines. Neurons, by contrast, are so interconnected that "parallel processing" hardly begins to cover it: connection cycles are the rule rather than the exception, unlike in many/most "neural net" architectures. It may even be that the standing waves enabled by such cycles are integral to the brain's basic functionality.
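    The contrast the comment draws can be sketched with two tiny hypothetical three-node networks, one a feedforward chain and one a cycle; the wiring and the fire-if-any-predecessor-fired rule are invented purely for illustration:

```python
# Toy contrast between a feedforward "pipeline" and a recurrent cycle.
# Hypothetical 3-node networks with binary activations; not a model of
# any real chip, just an illustration that cycles let activity persist
# after the external input is gone.
def step(adjacency, state, external):
    """One synchronous update: a node fires if it gets external input
    or if any of its predecessors fired on the previous step."""
    n = len(state)
    return [external[i] or any(adjacency[j][i] and state[j] for j in range(n))
            for i in range(n)]

feedforward = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]  # 0 -> 1 -> 2, no cycle
cyclic      = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]  # 0 -> 1 -> 2 -> 0

for name, adj in [("feedforward", feedforward), ("cyclic", cyclic)]:
    state = step(adj, [0, 0, 0], [1, 0, 0])   # one pulse of external input
    for _ in range(5):                        # then run with no input
        state = step(adj, state, [0, 0, 0])
    print(name, any(state))   # feedforward dies out; the cycle keeps going
```

    In the feedforward chain the pulse falls off the end after two hops, while in the cyclic network it circulates indefinitely, a crude analogue of the sustained activity patterns the comment alludes to.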

  • (Score: 2) by tibman (134) Subscriber Badge on Wednesday April 08 2015, @06:46PM (#167940)

    It gets even harder when simulating a network. The computer is doing serial operations over a parallel network: each neuron is processed sequentially. Should a neuron later in the list be able to see the output of a neuron earlier in the list within the same iteration, or should it only see the outputs from the previous iteration? So you randomize the list each iteration, or whatever evaluation strategy you choose. Then you add in the fact that not all neurons evaluate at the same speed, ouch.
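    The two evaluation strategies described above can be sketched on a hypothetical three-neuron chain (0 -> 1 -> 2) where each neuron simply copies its predecessor's value; the wiring and names here are invented for illustration:

```python
# Two update strategies for sequentially simulating a parallel network.
# Hypothetical chain 0 -> 1 -> 2; each neuron copies its input neuron.
weights = {1: 0, 2: 1}   # neuron i reads from neuron weights[i]

def synchronous(state):
    """Every neuron sees only the previous iteration's outputs."""
    return {0: state[0], 1: state[weights[1]], 2: state[weights[2]]}

def in_place(state, order):
    """Neurons updated one at a time; later neurons in `order` may see
    values already updated this same iteration."""
    state = dict(state)
    for i in order:
        if i in weights:
            state[i] = state[weights[i]]
    return state

start = {0: 1, 1: 0, 2: 0}
print(synchronous(start))        # signal advances one hop: {0:1, 1:1, 2:0}
print(in_place(start, [1, 2]))   # signal races the whole chain: {0:1, 1:1, 2:1}
print(in_place(start, [2, 1]))   # reversed order acts synchronous: {0:1, 1:1, 2:0}
```

    The in-place result depends entirely on the traversal order, which is exactly why randomizing the list each iteration (e.g. with random.shuffle) is one way to avoid baking an arbitrary ordering bias into the simulation.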

    Neural networks really are an unscheduled, massively parallel architecture. Simulating one with a scheduled sequential process is very error-prone, especially when most of the concern is with inputs and outputs. As far as the network itself is concerned, it doesn't care: it could sit and have an internal dialog all day and be happy. But we drive it toward a situation where different inputs immediately result in different outputs, which is an oversimplified and broken version of actual brain networks.

    --
    SN won't survive on lurkers alone. Write comments.